Preemption With Precision: How Congress Can Act on AI

Artificial intelligence (AI) policy observers in the United States entered 2026 facing some big questions, but none bigger than what the federal government will do on AI policy and how it will relate to new AI laws coming from the states.

With AI adoption accelerating across the economy and states continuing to advance their own policy approaches, there’s urgency for federal action — making 2026 a pivotal year for lawmakers to provide clarity, consistency, and direction through national AI legislation.

President Trump signed an executive order in December that sought to challenge or preempt state laws governing AI, while also tasking members of his Administration with generating a legislative proposal for a national policy framework.

But state lawmakers are moving forward. New York and California passed similar bills on transparency for frontier AI models and managing catastrophic risk; Colorado may revise its AI Act before it is set to come into force this summer; and state lawmakers in Washington, California, and elsewhere are moving forward on other AI bills.

How can Congress pass legislation?

Guiding Principle: Preempt With Precision

The Business Software Alliance (BSA) has long advocated for national technology laws in the US. The best way to remove barriers to AI innovation and adoption is through federal legislation grounded in areas of consensus.

There are a variety of views about how broadly federal preemption of state laws should reach. To move legislation forward this year, BSA is suggesting that policymakers adopt a simple principle: When Congress acts to establish a national approach for a specific AI issue, that federal approach should preempt state laws addressing the same issue.

By making preemption targeted and proportional to areas where Congress can agree on a workable national framework, this approach can provide clarity and consistency while also making bipartisan agreement more achievable.

By following this approach, Congress can:

  • Avoid a fragmented patchwork of state rules on the same issue;
  • Make bipartisan agreement more achievable; and
  • Allow national AI policy to develop steadily and durably.

States can and will move ahead with their ideas on AI policy, now and into the future. And while BSA consistently engages with state lawmakers to help them thoughtfully accomplish their goals, AI developers, the businesses that want to adopt AI, and consumers who want assurances that AI is being developed and used responsibly are not well-served by a piecemeal and patchwork approach.

A Flexible Approach

This principle offers flexibility to both state and national policymakers in the US.

It is important to recognize that states have a historic and critical role in protecting their people in ways appropriate for a given state. Congress can leverage some of the important work done in the states, drawing key ideas and lessons to help shape a national approach. At the same time, states’ valuable contributions on a wide range of issues affecting their residents — including consumer protection, combating fraud, and improving workplace protections — can continue to provide important protections.

This idea also enables Washington to take a deliberate approach to national AI policy. As the federal government continues to advance important standards work and measurement science for AI, Congress can step in to prevent a splintering of state rules on AI issues where it can enact a strong national approach.

Importantly, this approach presents a way for lawmakers to reach bipartisan consensus and pass AI policies into law. While the preemption debate has encountered some challenges, a “Preempt With Precision” approach offers the best pathway toward action on national AI legislation. This approach can lead to laws that take root and provide stability, giving businesses and consumers clarity that drives broad-based adoption of responsible AI.

Putting This Principle Into Action

This principle can apply to issues that are ripe for action. Leaders in Congress have introduced, and will continue to introduce, serious proposals, while the White House prepares its own legislative recommendation. As policymakers weigh these options, one issue for consideration is transparency and safety practices for frontier AI models.

Last year, California enacted Senate Bill 53, which focuses on frontier AI models — the most advanced systems — and requires safety assessments, public safety documentation, and incident reporting. At the same time, New York passed its Responsible AI Safety and Education (RAISE) Act and will soon enact chapter amendments to purposefully align it more closely with California. Importantly, these laws seek to address significant national security issues, including the proliferation of weapons of mass destruction, which are often best handled at the federal level.

To be clear: Regulation of frontier AI model safety does not encompass the universe of AI issues that policymakers can and should address. But it’s an example of where Congress could establish a national framework on an important AI issue and include preemption that’s tailored to state laws that may proliferate on the same topic.

Federal legislation could create clear, mandatory obligations for frontier AI models posing covered risks to develop a safety framework and disclose relevant information about that framework, which generally aligns with what is done in both California and New York. It could also create voluntary disclosure expectations and provide safe harbor from state transparency obligations to companies that comply with that standard.

BSA looks forward to working with federal lawmakers in the year ahead to help them thoughtfully strike the right balance between enacting national technology laws that further AI adoption and enabling state legislatures to make continued, important contributions to AI laws.

Author:

Craig Albright serves as BSA’s Senior Vice President for US Government Relations. In this role, he leads BSA’s team that drives engagement with Congress, the Administration, and all US states. He’s responsible for developing and implementing advocacy strategy to deliver results on issues across BSA’s policy agenda.

Prior to joining BSA, Albright spent four years as the World Bank Group's Special Representative for the United States, Australia, Canada and New Zealand, managing relations with government officials, private sector executives, think tank academics, civil society leaders and others. Before that, Albright spent more than 12 years in the US government. He served in the White House as Special Assistant to President George W. Bush for Legislative Affairs and Deputy Assistant to Vice President Dick Cheney for Legislative Affairs. In Congress, his positions included Legislative Director and Chief of Staff for former Congressman Joe Knollenberg of Michigan and Chief of Staff for Congresswoman Kay Granger of Texas.

Albright has been identified as one of the Top 100 association lobbyists by The Hill news organization and one of Washington’s Most Influential People by Washingtonian magazine. He is a native of the Detroit area and holds a BA in Economics from Michigan State University.
