Workday Chief Legal Officer on How the Partnership Between AI Developers and Deployers Advances AI Governance

By Rich Sauer, Workday Chief Legal Officer & Head of Corporate Affairs

Responsible AI is a team effort, and at Workday, we’re committed to doing our part. We’ve invested in the principles, practices, and people behind our Responsible AI program, and we collaborate with government officials in the US and around the world to advance meaningful and workable safeguards on high-risk AI.

As we’ve learned in privacy and data security, rigorous technology governance is a partnership between enterprise software providers (the technology developers) and their customers (the technology deployers). This is also true for AI governance. When AI is developed and used to make consequential decisions, such as hiring, both parties should take steps to manage potential risks.

Why? Because each has a different degree of control over, and visibility into, high-risk AI. Developers decide how AI is designed and trained but typically don’t have access to their customers’ data and don’t make consequential decisions. Deployers determine how AI is implemented and used but typically don’t design AI tools. As in any successful partnership, both parties must work toward a common goal: advancing trust in these technologies and amplifying human potential. The Future of Privacy Forum’s “Best Practices for AI and Workplace Assessment Technologies” illustrates how the developer-deployer partnership can advance responsible AI in the workplace.

Emerging AI laws and governance frameworks, many of which Workday has publicly supported, recognize the power of this partnership. The European Union’s Artificial Intelligence Act and Colorado’s law on algorithmic discrimination tailor risk management obligations to an organization’s role as a developer, deployer, or both. The NIST AI Risk Management Framework, which Workday has championed and adopted, and a landmark consent decree from the Federal Trade Commission also underscore responsibilities at different stages of the AI life cycle. Only when AI developers and deployers work together can we ensure that these technologies positively impact society.


About the author: 

Rich Sauer is Chief Legal Officer & Head of Corporate Affairs at Workday. He serves as Secretary of BSA’s Board of Directors.

About the series:

“AI Policy Solutions” contributors are industry leaders and experts from enterprise software companies who work with governments and institutions around the world to address artificial intelligence policy. These submissions showcase the pillars outlined in BSA’s recent “Policy Solutions for Building Responsible AI” document by demonstrating how their companies are continuously working toward the responsible adoption of AI technologies.
