Artificial intelligence (AI) has been adopted by organizations in almost every industry to improve healthcare outcomes, advance educational opportunities, create more robust accessibility tools, strengthen cybersecurity, and spur economic growth. But as this technology becomes more widespread, there is a growing risk that AI systems trained on data from the past will further entrench existing inequalities. BSA supports legislation that would require organizations to perform impact assessments prior to deploying high-risk AI systems. To advance these conversations, we recently launched the BSA Framework to Build Trust in AI, a detailed methodology for performing impact assessments that can help organizations responsibly manage the risk of bias throughout an AI system’s lifecycle.
Because the opportunities and challenges of AI are ultimately global in nature, it is vital that industry and policymakers around the world work together to create enduring policy solutions. To that end, last week we brought together leaders from both sides of the Atlantic to discuss these issues at our event, “Confronting Bias: A Path Toward Transatlantic Cooperation on AI Policy.”
I was pleased to moderate the first portion of our event, a fireside chat featuring US Secretary of Commerce Gina Raimondo and European Commission Executive Vice-President Margrethe Vestager. In their first meeting since the formal announcement of the US-EU Trade and Technology Council, both leaders emphasized the urgent need for democracies with shared values to align on policies that facilitate the responsible deployment of AI and promote innovation.
“I think it is vital that America and the EU write the rules of the road as it relates to AI … our challenge is how to regulate and set standards for emerging technology in a way that allows innovation to flourish, but that keeps people safe, secures privacy, and holds true to our shared democratic values,” said Secretary Raimondo. Applauding the release of the BSA Framework to Build Trust in AI, Secretary Raimondo added that engagement with industry stakeholders will be key to this process, emphasizing: “It can’t be that government folks sit around and create the rules of the road. Industry needs to lean in, and partner with us, and give us feedback, and engage with us, so we can accomplish what we need to do and not stifle innovation and job creation.”
Executive Vice-President Vestager also highlighted the importance of private and public sector collaboration to build trust in AI, noting that BSA’s Framework to Build Trust in AI “gives some really thoughtful examples of what it means to work against bias. Both the traps one can fall into, but also how one can avoid that.” EVP Vestager then underscored the importance of orienting AI legislation towards high-risk applications, noting that EU regulations “would leave the huge majority of AI applications without a regulatory touch, if there was not risk to something fundamental in the use case.”
After our fireside chat, BSA’s Christian Troncoso moderated an industry panel featuring Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce; Dana Rao, Executive Vice President and General Counsel at Adobe; Rich Sauer, Chief Legal Officer and Head of Corporate Affairs at Workday; and Kush R. Varshney, Distinguished Research Staff Member and Manager at IBM Research.
Dana Rao began the conversation by discussing Adobe’s work to combat bias in AI by operationalizing its approach to AI ethics. Core to that journey, he noted, was ensuring that Adobe’s approach to AI was aligned with its values while still pushing the boundaries of innovation. He encouraged other organizations to develop their own approaches to integrating ethics into their AI development processes: “Every company is different. I would encourage everyone out there to really reflect on your business model, your goals, who you are, and then make sure that these principles will accurately reflect those.”
In her role as the first-ever Chief Ethical and Humane Use Officer at Salesforce, Paula Goldman also spends a lot of time thinking about how to create AI systems that are ethical by design. As part of their development process, employees at Salesforce are encouraged to engage in “consequence scanning” by looking at a product and asking: What are you trying to accomplish? What are the intended consequences? What might be some of the unintended consequences? “It’s impossible to truly take out bias from AI, just like it’s impossible to take out bias from human beings completely,” Goldman said, adding, “It’s not just getting to what’s good enough. It’s continual improvement and continual shared learning.”
In its recent white paper on AI policy, Workday warned of a possible “trust gap” in AI, and during the panel discussion Rich Sauer explained how companies and governments each have important roles to play in closing that gap. “We think we can do a lot of things to earn our customers’ trust, but we think it will be additionally helpful if customers know Workday is doing these things because they’re required to by law, and there’s a regulator looking over their shoulder,” Sauer noted. His point was clear: customers need to know they can trust the technology they’re using.
IBM Research’s Kush R. Varshney then offered expert insight into what AI regulations and BSA’s Framework mean for the researchers and developers creating AI systems. Clear guidance from government “will spur more innovation and will actually be something for us to get excited about as researchers and developers coming up with solutions,” Varshney said. The best way to make progress, he added, is to treat everything from early development to end use as one life cycle, and to work to mitigate risk at every stage of that life cycle.
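To make that life cycle framing concrete, here is a minimal sketch of the kind of automated bias check a development team might run before deployment. It computes the disparate impact ratio, a standard fairness metric that also appears in open-source toolkits such as IBM’s AI Fairness 360; the function, the toy data, and the group encoding below are illustrative assumptions, not a method prescribed by BSA or any of the panelists.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates between the unprivileged
    (group == 0) and privileged (group == 1) groups; values well
    below 1.0 flag potential bias against the unprivileged group."""
    rate_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged
    rate_priv = y_pred[group == 1].mean()    # favorable rate, privileged
    return rate_unpriv / rate_priv

# Toy example: binary model predictions and a binary group attribute.
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Disparate impact ratio: {disparate_impact(y_pred, group):.2f}")  # 0.67
```

A check like this is only one stage of the life cycle Varshney describes; similar audits would recur as data, models, and deployment contexts change.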
Interested in learning more? You can watch the fireside chat here and the panel discussion here. If you’d like to read more about mitigating the risks of bias in AI, find BSA’s Framework to Build Trust in AI here.