
How Shared Responsibilities Keep AI Accountability Aligned With Innovation


Responsible AI Depends on Responsible Actions

Policymakers and the artificial intelligence (AI) industry share a common goal: ensuring AI is developed and used responsibly. The best way to achieve that is by setting clear expectations for responsible behavior, backed by obligations that hold companies accountable for what they can control.

State lawmakers in the United States have shown growing interest in regulating AI, and ensuring AI policies incorporate workable liability mechanisms is critical to encouraging responsible practices among developers, integrators, and deployers of AI tools. But efforts to apply a strict liability approach are misguided, because strict liability is a poor fit for AI systems, where outcomes depend heavily on how an AI tool is deployed.

AI policies should emphasize responsible conduct by encouraging both developers and deployers to identify and address risks, undertake robust assessments, and act promptly when problems emerge. These policies should also evolve with technology and context, ensuring accountability keeps pace with innovation.

Here’s what policymakers need to know as they consider their approach.

Different Companies Must Take Different Actions

AI policies must reflect the distinct roles of the different types of companies that create and use AI tools.

Those companies include developers, who create AI models or systems that can be used in a variety of settings, and deployers, who use AI tools for specific purposes. For example, a bank may act as a deployer when it purchases an AI tool developed by another company and uses it to detect fraudulent transactions. Other types of companies that play a role include integrators that incorporate AI tools into software.

These distinct roles matter because each type of company will know different information, which determines the actions they’ll take to identify and mitigate potential risks. For example, a developer can assess risks involved in training an AI model, but a deployer is best positioned to understand risks arising from how an AI system is actually used.

When laws and regulations blur these roles, one type of company can end up responsible for actions it did not take. That is a bad result for consumers, since they won’t benefit from the protections the law intended to create.

Frameworks for Responsible AI

The Business Software Alliance (BSA) has worked closely with our member companies to identify the specific actions that companies can take to develop and deploy AI tools responsibly. That work led to our “Framework to Build Trust in AI,” released more than four years ago. BSA’s framework is similar to the National Institute of Standards and Technology’s “AI Risk Management Framework,” which companies across the economy use to identify and manage potential risks for their organizations.

Companies must apply these types of frameworks based on whether they’re developing, integrating, or deploying AI, since the actions they can take depend on their role.

Blurring these roles risks unfairly burdening one party or letting another escape accountability. Clear obligations make it possible for each actor to manage their own risks and to design complementary safety measures throughout the AI lifecycle.

Approaches to Assigning Responsibility in AI Policy

When crafting legislation, several mechanisms are available to policymakers to ensure companies comply with their legal obligations. Not all mechanisms, however, are best suited for AI policy.

The most straightforward approach to ensuring that companies develop and use AI responsibly is to place clear obligations on them, based on their role in the AI value chain, and to hold them liable when they fail to comply. Such an approach creates clarity for businesses in understanding their responsibilities and robust protections for consumers.

At the state level, we’ve seen interest in ensuring companies develop and deploy AI responsibly by assigning them a duty of care. Colorado’s recent AI law, for example, places a duty of care on both developers and deployers and assigns each type of company a set of actions it can take to create a legal presumption that it meets that duty. We’ve also seen misguided efforts to shift from that approach to a strict liability regime.

The concept of a “duty of care” is deeply rooted in tort law, which governs civil wrongs and personal injury. Courts frequently impose a duty of care on individuals or organizations that have the power to prevent foreseeable harm to others. For example, drivers must operate their cars safely to avoid injuring pedestrians; a doctor must act as a reasonably competent physician would under similar circumstances; a company must ensure that its products are safe for ordinary use.

These duties are not static rules — they evolve with context, technology, and social expectations. The standard is flexible, focusing on whether an actor took reasonable steps to prevent foreseeable harm given their role, expertise, and resources. That flexibility can promote responsible development and use of fast-changing technologies like AI, especially when paired with a specific list of actions that companies can take to meet the standard.

Why Other Liability Systems Create Concerns When Applied to AI

Policymakers focused on AI issues have occasionally looked to other liability systems, such as products liability or strict liability. This is problematic, as those systems assign liability based on outcomes rather than conduct.

Under products liability, for instance, a manufacturer can be held liable for harm even if it took all reasonable precautions. That approach is a poor fit for AI systems, where outcomes depend heavily on how an AI tool is deployed. For example, a developer may create an AI system that is well-suited to specific uses, but a deployer might then create significant risks if they use it in other settings. Each business should be held responsible for what it can control.

In contrast, an approach built on clear obligations and a duty of care emphasizes responsible behavior, encouraging both developers and deployers to identify and address risks, conduct robust testing, and act promptly when problems emerge.

Shifting to a strict liability approach doesn’t make sense when the risks to consumers vary greatly depending on how a particular technology is used. For example, if a bakery undercooks pies made with eggs and its customers get sick, it wouldn’t make sense to hold the bakery’s egg vendor liable. Policies should instead create incentives for each actor to act responsibly. That approach creates better outcomes for consumers.

Author:

Craig Albright serves as BSA’s Senior Vice President for US Government Relations. In this role, he leads BSA’s team that drives engagement with Congress, the Administration, and all US states. He’s responsible for developing and implementing advocacy strategy to deliver results on issues across BSA’s policy agenda.

Prior to joining BSA, Albright spent four years as the World Bank Group's Special Representative for the United States, Australia, Canada and New Zealand, managing relations with government officials, private sector executives, think tank academics, civil society leaders and others. Before that, Albright spent more than 12 years in the US government. He served in the White House as Special Assistant to President George W. Bush for Legislative Affairs and Deputy Assistant to Vice President Dick Cheney for Legislative Affairs. In Congress, his positions included Legislative Director and Chief of Staff for former Congressman Joe Knollenberg of Michigan and Chief of Staff for Congresswoman Kay Granger of Texas.

Albright has been identified as one of the Top 100 association lobbyists by The Hill news organization and one of Washington’s Most Influential People by Washingtonian magazine. He is a native of the Detroit area and holds a BA in Economics from Michigan State University.
