
Introducing BSA’s AI Bias Risk Management Framework

The Framework is a playbook that organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability.

As the pace of digital transformation continues to accelerate, businesses around the world are looking to artificial intelligence (AI) to improve their competitiveness, enhance their value proposition, and increase their capacity to make data-informed decisions. While the adoption of AI can unquestionably be a force for good, it can also create unique risks. Because AI is trained on data from the past, there is a risk that AI systems may replicate and potentially further entrench historical inequities. As AI is integrated into business processes that can have real-world impacts on people’s lives, there is an urgent need to ensure that these systems are being designed and deployed in ways that account for the needs of all communities that may encounter them.

In a world where many of the most important aspects of our lives are mediated by technology, the consequences of AI bias can be catastrophic. There is perhaps no greater illustration of the stakes of AI bias than the recent revelation that US hospitals had been relying on a deeply flawed AI system to identify patients who required urgent care. A team of researchers that examined the system demonstrated that “people who self-identified as black were generally assigned lower risk scores than equally sick white people,” and as a result the system “was less likely to refer black people than white people who were equally sick to [programs] that aim to improve care for patients with complex medical needs.” The research team concluded that the bias arose because the system sought to predict a patient’s healthcare needs using historical data about their actual healthcare costs. Unfortunately, because minority patients have historically had less access to healthcare, using healthcare costs as a proxy for their current needs painted an inaccurate picture and led to dangerously biased outcomes.
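The proxy-label mechanism described above can be made concrete with a minimal sketch. The numbers and the scoring rule below are hypothetical illustrations, not the actual hospital system studied by the researchers; the sketch only shows how scoring patients by historical spending, rather than by medical need, disadvantages a group with historically reduced access to care.

```python
def risk_score_from_cost(historical_cost, max_cost=10_000):
    """Toy model: 'risk' is just normalized historical spending (the flawed proxy)."""
    return historical_cost / max_cost

# Two equally sick patients (same true medical need), but one belongs to a
# group that has historically had less access to care and so incurred lower costs.
patient_a = {"true_need": 0.8, "historical_cost": 8_000}  # well-served group
patient_b = {"true_need": 0.8, "historical_cost": 4_000}  # under-served group

score_a = risk_score_from_cost(patient_a["historical_cost"])  # 0.8
score_b = risk_score_from_cost(patient_b["historical_cost"])  # 0.4

# Despite identical need, only patient_a clears a typical referral threshold.
threshold = 0.6
print(score_a >= threshold)  # True  -> referred to care-management program
print(score_b >= threshold)  # False -> not referred
```

The fix the researchers pointed toward is changing the training target: predicting a direct measure of health (such as active chronic conditions) rather than past cost removes the access gap from the label.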

The risks of AI bias are considerable, but they are not insurmountable. There is a deep body of research about what organizations can do to mitigate the risk of AI bias. And today, we seek to contribute to the conversation through the publication of the BSA Framework to Build Trust in AI, a new risk management framework to help guide the development and use of AI so that the risks of bias are minimized at every step of a system’s lifecycle. Built on a vast body of research and informed by the experience of leading AI developers, the Framework:

  • Outlines an impact assessment process for identifying risks of bias throughout an AI system’s lifecycle;
  • Highlights existing best practices, technical tools, and resources for mitigating specific AI bias risks when they emerge; and
  • Sets out key corporate governance structures, processes, and safeguards that are needed to implement and support an effective AI risk management program.

The Framework is a playbook that organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. The foundation of the Framework is its detailed methodology for performing impact assessments that help ensure that critical decisions are documented and that an organization’s product development team, its compliance personnel, and senior leadership are aligned on the appropriate steps for mitigating risks of bias when they are identified.
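One way to picture the documentation-and-alignment goal described above is as a structured assessment record. The fields and sign-off roles below are illustrative assumptions on my part, not the Framework's actual impact-assessment template; the point is simply that recording identified risks, mitigations, and approvals in one artifact keeps product, compliance, and leadership teams working from the same facts.

```python
from dataclasses import dataclass, field

@dataclass
class BiasImpactAssessment:
    """Hypothetical record of one impact assessment (fields are illustrative)."""
    system_name: str
    lifecycle_stage: str                                   # e.g. "design", "training", "deployment"
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    sign_offs: dict = field(default_factory=dict)          # role -> approved?

    def ready_to_proceed(self) -> bool:
        """Every identified risk has a mitigation and every role has signed off."""
        return (len(self.mitigations) >= len(self.identified_risks)
                and all(self.sign_offs.values()))

assessment = BiasImpactAssessment(
    system_name="patient-triage-model",
    lifecycle_stage="training",
    identified_risks=["cost used as proxy for medical need"],
    mitigations=["retarget model to predict active chronic conditions"],
    sign_offs={"product": True, "compliance": True, "leadership": False},
)
print(assessment.ready_to_proceed())  # False: leadership has not yet signed off
```

Because the record is explicit, a missing mitigation or sign-off blocks progress automatically rather than relying on someone remembering to ask.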

We hope that the Framework will also be a useful tool for policymakers as they consider the future of AI regulation. Because of the profound impact that AI can have on people’s lives, the public should be assured that high-risk AI systems are being designed and deployed responsibly. Requiring organizations to perform impact assessments on high-risk systems is an important mechanism that can help create strong market incentives for effective risk management.

Interested in learning more? Read the BSA Framework to Build Trust in AI here.

Author:

Christian Troncoso is Senior Director, Policy for BSA | The Software Alliance. He works with members to develop BSA policy on a range of legal, legislative, and regulatory issues, including copyright, cybersecurity, and privacy. Prior to joining BSA, he served as Senior Counsel for the Entertainment Software Association, where he advocated on behalf of video game publishers in the United States and before foreign governments. Troncoso earned an LL.M. with a focus on intellectual property from The George Washington University, a J.D. from the University of Denver, and a bachelor’s degree from the University of Richmond. He is based in BSA’s Washington, DC, office.
