California, the world’s fourth-largest economy and a tech powerhouse, is too influential on tech policy to get artificial intelligence (AI) regulation wrong. Unfortunately, Assembly Bill 1018, which is pending before the state legislature, risks doing just that.
With the advent of generative AI and the cascade of new possibilities it creates through applications like agentic services, every sector of the economy is figuring out how it can benefit from this new wave of software-driven transformation.
That’s why it’s critical that legislators in California reverse course on Assembly Bill 1018, the Automated Decisions Safety Act.
The Business Software Alliance (BSA) strongly supports meaningful safeguards for high-risk uses of AI, and we have worked to develop them with governments in Europe, Asia, and across numerous US states. Because AB 1018 is built on a misunderstanding of how AI systems are developed and used throughout the economy, it could unintentionally put California on a course to strangle the dynamic innovation that’s underway.
The stakes of getting AI regulation wrong are too high for California to proceed with a flawed approach.
Broad and Vague Scope: Regulating Far More Than High-Risk AI
AB 1018 seeks to regulate “high-risk” uses of AI, meaning AI systems that decide whether consumers are granted or denied important benefits and opportunities, like employment, education, housing, and health care. While this is a key area to address, the bill’s definitions are so broad that they risk sweeping in uses well beyond high-risk ones, including mundane uses of AI. For instance, the bill could impact everyday tools that consumers enjoy, such as route recommendations in mapping applications, because they could be viewed as affecting the “quality” of consumers’ transportation.
This broad and vague scope also means the bill can reach far beyond technology companies. Increasingly, businesses in every industry can modify and incorporate AI tools for use in different contexts on their own. This has been happening for years, and generative AI is making it easier and unlocking more opportunities. Given how AI is used throughout the economy, the bill would subject a staggering range of technologies and businesses across industry sectors to new legal risks and compliance obligations.
The problems with AB 1018’s scope are that it:
- Includes statistical modeling and data analytics that play no meaningful role in decision-making;
- Includes tools that are “used” to “assist” human decision-making, sweeping in a broad range of technologies that do not meaningfully impact decision-making; and
- Will encompass many decisions that are not the focus of the bill by relying on vague, hard-to-measure terms like the “quality” and “accessibility” of important opportunities and benefits.
AB 1018 falls short of its own goals by muddying the waters for everyone trying to build AI responsibly. The bill should focus on AI systems that actually determine whether a consumer is granted or denied critical benefits and opportunities.
Misaligned Responsibilities: Misunderstanding the AI Value Chain
The AI supply chain is complex, varied, and rapidly evolving. It also involves multiple players: one company may develop a general-purpose system, others may adapt it with different applications and tools along the way, and yet another company may modify it before deploying it directly to consumers. Each company developing or deploying high-risk AI systems should have role-based responsibilities for the risks it introduces, and it’s essential that AI legislation accurately reflects these different roles.
AB 1018 distorts the roles and responsibilities of AI developers and deployers and will not work in practice. Its oversimplification of the AI value chain is concerning because it treats companies as developers based on how others use their technology, even when they did not develop it for a high-risk purpose. For example:
- A company that makes statistical modeling software might sell it to businesses across the economy. But AB 1018 could subject this company to a developer’s obligations if one customer uses the software to predict how many employees they can hire in the next year. The company won’t know its customers are using the software in that way — and shouldn’t be required to assess their decisions.
- A more complex but increasingly common scenario might include five different companies. Company 1 develops a general-purpose AI model. Company 2 creates a software development platform that allows other businesses to build applications using the general-purpose AI model. Company 3 uses the software development platform to build an AI-powered application specifically for the health care industry. Company 4 modifies the application to assist hospitals in deciding how to triage patients. Company 5, a hospital, may modify the application for its particular patients. AB 1018 risks treating all five companies as developers of a high-risk AI system, even though they did not intentionally develop an AI tool for a high-risk purpose.
The regime the bill imagines is unworkable because it asks developers to account for the risks of an AI system as it is ultimately deployed. In reality, developers cannot anticipate the new uses that other developers and deployers will create with the system. The bill effectively expects companies to evaluate uses they are not aware of because other companies are still inventing them. Businesses can’t evaluate what they don’t know.
This misunderstanding of how AI is developed and used risks crippling the AI ecosystem in California. This level of unworkability can create a climate where companies will think twice about developing or offering AI in California — and where potential users of AI will think twice about adopting it. The result? Less innovation, lost investment to other states, less adoption of AI, and missed opportunities for Californians to benefit from AI tools that improve lives.
The bill should instead reflect the reality of how AI systems and tools are developed and used. Each business developing or deploying high-risk AI should be responsible for the risks it introduces to the AI system, not for those introduced by another company.
Third-Party Audits: Premature and Problematic
Accountability matters, and that’s why BSA has recommended that both developers and deployers of high-risk AI systems conduct impact assessments and implement risk management frameworks. AB 1018, however, takes the extreme approach of mandating third-party audits for the broad set of companies described above.
It is unclear how these audits will work, because they require developers and companies that modify covered AI systems to assess their compliance with the bill. It’s also unclear how companies that design statistical modeling, data analytics, or AI tools with applications across industry sectors will assess their own compliance if a deployer uses their general-purpose tool in a high-risk way, or how they will work with a third-party auditor to understand how other companies are using their tools and whether those companies’ actions create risks covered by the bill.
Even if the bill created workable obligations for developers, the requirements for third-party audits would still be concerning because:
- The AI audit industry is nascent, with no common standards or professional frameworks in place for AI audits, and no guarantee that the ecosystem will have sufficiently matured by 2030, when the bill’s audit requirement takes effect;
- The bill circumvents international processes underway to create globally recognized AI standards by requiring third-party audits for its state-specific obligations; and
- The bill ignores well-established accountability tools, like impact assessments and risk management programs, in favor of establishing a costly and unproven auditing regime.
AB 1018 would go far beyond any existing regulation by subjecting AI to an auditing regime where no common standards exist and where developers may not even know how their tools are being used. Impact assessments and risk management frameworks for companies that develop and deploy high-risk AI systems, by contrast, are practically feasible today and promote accountability.
One Bill, Many Enforcers: A Recipe for Confusion
AB 1018 would be enforced by a patchwork of state and local authorities, plus private lawsuits under existing state laws. Authorizing so many different enforcement mechanisms invites inconsistent, or worse, conflicting, interpretations as different enforcers and courts reach their own conclusions about how companies must apply the bill’s obligations.
Any AI legislation of this nature should provide the Attorney General with exclusive enforcement authority to promote clarity, consistency, and effective oversight.
A Better Way Forward
California has long been a global leader in advancing technology. Now is the time to establish practical, targeted, and enforceable standards, and to avoid legislation that’s out of sync with how AI is actually built, deployed, and used. AB 1018’s overly broad scope, mismatched obligations, and premature audit requirements would create more confusion than clarity — and could stall meaningful progress in AI development and adoption.
There’s a better path. Legislators can zero in on truly high-impact uses of AI, define roles in line with how technology flows through the economy, and promote accountability tools that can be used today. We look forward to working together to create a more effective approach.
For more information on BSA’s AI policy positions, see www.bsa.org/ai.