
Confronting AI Bias: A Transatlantic Approach to AI Policy


Artificial intelligence (AI) has been adopted by organizations in almost every industry to improve healthcare outcomes, advance educational opportunities, create more robust accessibility tools, strengthen cybersecurity, and spur economic growth. But as this technology becomes more widespread, there is a growing risk that AI systems trained on data from the past will further entrench existing inequalities. BSA supports legislation that would require organizations to perform impact assessments prior to deploying high-risk AI systems. To advance these conversations, we recently launched the BSA Framework to Build Trust in AI, a detailed methodology for performing impact assessments that can help organizations responsibly manage the risk of bias throughout an AI system’s lifecycle.

Because the opportunities and challenges of AI are ultimately global in nature, it is vital that industry and policymakers around the world work together to create enduring policy solutions. To that end, last week we brought together leaders from both sides of the Atlantic to discuss these vital issues at our event, “Confronting Bias: A Path Toward Transatlantic Cooperation on AI Policy.”

I was pleased to moderate the first portion of our event, a fireside panel featuring US Secretary of Commerce Gina Raimondo and European Commission Executive Vice-President Margrethe Vestager. In their first meeting since the formal announcement of a US-EU Trade and Technology Council, both leaders emphasized the urgent need for democracies with shared values to align on policies that will facilitate the responsible deployment of AI and promote innovation.

“I think it is vital that America and the EU write the rules of the road as it relates to AI … our challenge is how to regulate and set standards for emerging technology in a way that allows innovation to flourish, but that keeps people safe, secures privacy, and holds true to our shared democratic values,” said Secretary Raimondo. Applauding the release of the BSA Framework to Build Trust in AI, Secretary Raimondo added that engagement with industry stakeholders will be key to this process, emphasizing: “It can’t be that government folks sit around and create the rules of the road. Industry needs to lean in, and partner with us, and give us feedback, and engage with us, so we can accomplish what we need to do and not stifle innovation and job creation.”

Executive Vice-President Vestager also highlighted the importance of private and public sector collaboration to build trust in AI, noting that BSA’s Framework to Build Trust in AI “gives some really thoughtful examples of what it means to work against bias. Both the traps one can fall into, but also how one can avoid that.” EVP Vestager then underscored the importance of orienting AI legislation towards high-risk applications, noting that EU regulations “would leave the huge majority of AI applications without a regulatory touch, if there was not risk to something fundamental in the use case.”

After our fireside chat, BSA’s Christian Troncoso moderated an industry panel featuring Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce; Dana Rao, Executive Vice President and General Counsel at Adobe; Rich Sauer, Chief Legal Officer and Head of Corporate Affairs at Workday; and Kush R. Varshney, Distinguished Research Staff Member and Manager at IBM Research.

Dana Rao began the conversation by discussing Adobe’s work to combat bias in AI by operationalizing its approach to AI ethics. Core to that journey, he noted, was ensuring that Adobe’s approach to AI was aligned with its values while still pushing the boundaries of innovation. He encouraged other organizations to develop their own approach for integrating ethics into their AI development processes: “Every company is different. I would encourage everyone out there to really reflect on your business model, your goals, who you are, and then make sure that these principles will accurately reflect those.”

In her role as the first-ever Chief Ethical and Humane Use Officer at Salesforce, Paula Goldman also spends a lot of time thinking about how to create AI systems that are ethical by design. As part of their development process, employees at Salesforce are encouraged to engage in “consequence scanning” by looking at a product and asking: what are you trying to accomplish? What are the intended consequences? What might be some of the unintended consequences? “It’s impossible to truly take out bias from AI, just like it’s impossible to take out bias from human beings completely,” Goldman said, adding, “It’s not just getting to what’s good enough. It’s continual improvement and continual shared learning.”

In its recent white paper on AI policy, Workday cautioned about the possible rise of a “trust gap” in AI, and during the panel discussion Rich Sauer explained how companies and governments each have important roles to play in closing the gap. “We think we can do a lot of things to earn our customers’ trust, but we think it will be additionally helpful if customers know Workday is doing these things because they’re required to by law, and there’s a regulator looking over their shoulder,” Sauer noted. In the current climate, it’s essential that customers know that they can trust the technology they’re using.

IBM Research’s Kush R. Varshney then offered some expert insight into what AI regulations and BSA’s Framework mean for the researchers and developers creating AI systems. Clear guidance from government “will spur more innovation and will actually be something for us to get excited about as researchers and developers coming up with solutions,” Varshney said. The best way to make progress, he added, is to consider everything from the early development to end use as part of one life cycle, and work to mitigate risk in every part of that life cycle.

Interested in learning more? You can watch the fireside chat here and the panel discussion here. If you’d like to read more about mitigating the risks of bias in AI, find BSA’s Framework to Build Trust in AI here.

Author:

Victoria Espinel is a global leader advancing the future of technology innovation.

As CEO of BSA | The Software Alliance, Victoria has grown the organization’s worldwide presence in over 30 countries, distinguishing BSA as the leader for enterprise software companies on issues including artificial intelligence, privacy, cybersecurity, and digital trade. She launched the Digital Transformation Network and the Global Data Alliance, flagship BSA initiatives to further BSA’s collaboration with 15+ industry sectors globally. Victoria founded Software.org, the enterprise software industry’s nonprofit partner that educates policymakers and the public about the impact of software and careers within the industry. 

Victoria serves on President Biden’s National Artificial Intelligence Advisory Committee (Chair of the International Working Group), served as a member of the President’s USTR Advisory Committee for Trade Policy and Negotiations (ACTPN), and chaired the Future of Software and Society Group at the World Economic Forum. She is a lifetime member of the Council on Foreign Relations. 

Victoria has testified on multiple occasions before the US Congress, European Parliament, and Japanese Diet. Victoria speaks frequently to groups about AI, cybersecurity, and STEM education, including Latinas in Tech, Girls Rule the Law, the Congressional Staff Hispanic Association, Women’s Congressional Staff Associations, Girls Who Code, EqualAI, CSIS, and numerous academic institutions. She has been featured in a wide range of media outlets, including the New York Times, Washington Post, Financial Times, Forbes, C-SPAN, BBC, Bloomberg Business, The New Yorker, and NPR.

Prior to BSA, Victoria was confirmed by the US Senate to serve as the first White House “IP Czar,” establishing a new office in the White House and advising President Obama on intellectual property. She also served in the Bush Administration as the first chief US trade negotiator for intellectual property and innovation, a role in which she created the office of Intellectual Property and Innovation at USTR and led negotiations with over 70 countries. 

Victoria launched Girls Who Code’s Washington, DC summer program and serves on the Board of Directors for ChIPs, a nonprofit organization advancing women in technology law and policy. 

She holds an LLM from the London School of Economics, a JD from Georgetown University Law School, and a BS in Foreign Service from Georgetown University’s School of Foreign Service. She is a native of Washington, DC, and the proud proprietor of Jewel of the South, a restaurant in New Orleans. 
