
AI Policy Readiness: 2024 Outlook


Governments are moving steadily toward better artificial intelligence (AI) policy readiness, as a new BSA analysis of nine different jurisdictions recently indicated.

But with 2024 a major election year, the question is whether the direction and consistency of policy will remain on course.

AI Policymaking Increases as Elections Loom

The pace of policymaking on AI continues to increase around the world. A remarkable feature of these developments is that most efforts have remained focused on similar objectives: creating opportunities for AI to improve the competitiveness of other industries while limiting the risk of AI being misused (intentionally or not), and focusing regulations primarily on high-risk use cases.

BSA’s AI policy readiness index evaluated nine different national governments and multilateral groups based not only on whether appropriate AI laws or guidance have been issued, but also on whether other key areas that affect responsible AI are being addressed: privacy, cybersecurity, cross-border data transfers, intellectual property, and workforce preparedness.

At the same time, more than two dozen democracies will have voted in general elections by the end of 2024, including several of the largest (India, the US, and the EU) and three of the G7’s seven member nations. So what does this mean for AI policy?

Election Impacts: Early Indicators

The UK presents a good example. Elections in early July swept a Labour majority and a new Prime Minister, Keir Starmer, to government, replacing a previous government that had not planned to pursue AI regulation.

The King’s Speech last week included a plan for AI regulation, signaling that the UK could now act. In our readiness index, we noted that the UK did not yet have regulations governing AI risk specifically, but it already has in place many of the other important laws and policies needed to be prepared for responsible AI. If well tailored and risk-based, new guidance or regulation could further improve the UK’s AI readiness.

The elections in the US will certainly shape AI policy as well. Both presidential candidates are emphasizing different priorities, though there are also areas of consistency. For instance, the US’s guidance (through NIST) on responsible AI development began in the previous administration and was completed in the current one, indicating that it is both bipartisan and likely to remain an important touchstone for companies.

At the same time, with so much AI action taking place in state capitols from Connecticut to Colorado to California, state elections in the US may have as much impact on commercial rules for AI as the federal elections.

Good News, Bad News?

BSA’s AI readiness index indicated where some governments have not yet created guidance or laws focused directly on responsible AI, but a crucial finding was that there were no red X marks anywhere – those would indicate where a government’s policy has actively impeded AI use.

That’s the good news, but it doesn’t rule out the possibility of policies moving in a more negative direction.

If governments make it harder to train AI, or harder to use the data that ensures AI is representative of all communities; if there’s incompatibility between regulatory systems; if protectionist instincts override the benefits of digital solidarity among like-minded governments, we may end up with a different readiness map in the future. As new governments begin formulating AI-related policies in the coming months, it will be important for them to work together and build digital solidarity – not with identical policies, but with interoperable laws and guidance that allow all to benefit.

Author:

Aaron Cooper serves as Senior Vice President, Global Policy. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows, and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.

Cooper previously served as Chief Counsel for Chairman Patrick Leahy on the US Senate Judiciary Committee, and as Legal Counsel to Senator Paul Sarbanes. Cooper came to BSA from Covington & Burling, where he was of counsel, providing strategic guidance and policy advice on a broad range of technology issues.

Cooper is a graduate of Princeton University and Vanderbilt Law School. He clerked for Judge Gerald Tjoflat on the US Court of Appeals for the Eleventh Circuit.
