
Ambition Is the Word: Understanding the EU AI Act

The European Union is one step closer to adopting the world’s first comprehensive regulatory framework for artificial intelligence (AI) following a significant procedural step this month on the EU AI Act.

The AI Act cleared a major milestone on February 2, when EU Member States unanimously endorsed its text, demonstrating a common stance toward the regulation of AI within the EU. The Act will now face votes for formal adoption in the European Parliament, first at the Committee level in mid-February and then in the Plenary session in April.

The AI Act’s ambition extends not only to its scope but also to its timeline. Some provisions, such as the prohibition of certain AI use cases, are slated to take effect as soon as six months after the Act’s entry into force. Given the significance of this legislation, it is important to take a closer look at how the AI Act will regulate AI, its approach to risk, and what kind of AI systems the Act covers.

A Risk-Based Approach – for Almost All AI

The AI Act was originally intended to establish a risk-based framework for AI: in other words, to tailor compliance obligations to the level of risk an AI system poses, identifying high-risk uses that warrant stricter regulation and determining which AI systems should be banned outright in the EU. As the Act developed, it eventually also came to include new rules for General Purpose AI (GPAI) models (i.e., models that can be deployed for a multitude of possible uses), regardless of the risk they may pose.

BSA’s global advocacy on AI has long emphasized the need to maintain a risk-based approach to artificial intelligence regulation and to calibrate compliance obligations to a company’s role along the AI Value Chain. More specifically, the AI Act establishes a complex system of obligations for high-risk AI and GPAI:

Obligations for High-Risk AI Developers and Deployers

The Act requires AI Developers (“Providers” in the AI Act’s parlance) of high-risk AI systems (e.g., AI used in recruitment, banking, credit scoring, and safety components of critical infrastructure) to prepare a risk management framework; to implement data governance policies that ensure accuracy, robustness, and cybersecurity and that minimize bias; and to establish a quality management system to monitor the performance of the AI.

AI Deployer companies have separate obligations. They are required to ensure an appropriate level of AI literacy among the employees deploying AI and to monitor the performance of the system as it carries out its functions.

Rules for General Purpose AI

Certain AI technologies, such as the increasingly popular generative AI systems, will have to comply with specific transparency requirements, regardless of risk, by disclosing when content has been artificially manipulated or generated.

For GPAI models, developers will need to provide technical documentation and cooperate with customers to share relevant development information. This collaboration is particularly crucial when GPAI models are deployed in high-risk settings, where the deploying entity bears responsibility for compliance with the Act. Additionally, GPAI models that pose systemic risks will be subject to rigorous risk assessments and additional development activities, such as adversarial testing.

Complex Compliance

The complex compliance obligations outlined in the AI Act will be monitored by an equally intricate governance framework. The European Commission will soon establish an AI Office with coordination and monitoring responsibilities, especially concerning GPAI. Member States will designate national supervisory authorities tasked with monitoring high-risk AI and coordinating the work of the various sectoral market surveillance authorities that may be involved (for instance, where AI is used in the employment sector).

An Ambitious Set of Objectives for an Ambitious Timeline

The Act’s ambition extends beyond its immediate implementation: while other countries are also developing AI regulations, the EU aims to lead the way by setting comprehensive standards for AI, much as it did on privacy with the General Data Protection Regulation (GDPR). The EU also ensured that the AI Act reflects international work, chiefly that of the Organisation for Economic Co-operation and Development (OECD) and the US National Institute of Standards and Technology (NIST), in defining what artificial intelligence is for the purposes of the legislation.

The AI Act also sets a very ambitious pace for its obligations to take effect: obligations on GPAI become applicable 12 months after entry into force, and obligations for high-risk AI systems within 24 months. In the meantime, the AI Office to be established by the Commission will have nine months to draft a Code of Practice to support GPAI compliance. The AI Office and the European Commission will also prepare key guidelines and secondary legislation on a variety of important subjects, including:

  • How to further define which kinds of AI systems will be considered high-risk;
  • How to establish technical standards for transparency;
  • How to comply with high-risk obligations; and
  • How to allocate contractual responsibilities along the AI Value Chain.

The Act additionally calls on companies to adopt yet-to-be-developed standards for AI. The Commission and the AI Office are likely to expand their staffs to take on the significant work ahead in implementing the AI Act.

Ultimately, the AI Act recognizes the need for collaboration and consultation with stakeholders, including industry, whose input is to be incorporated into the development and implementation of its regulations. As the AI Act progresses towards full implementation, BSA will play a crucial role in representing the interests of B2B AI developers and deployers before EU institutions and around the world.

Author:

Matteo Quattrocchi is Director, Policy – EMEA at BSA | The Software Alliance in Brussels, Belgium. In this role, he works with BSA members to develop and advance policy positions on a range of key issues, with a focus on artificial intelligence, copyright and government access to data. Prior to joining BSA, Quattrocchi was a Public Affairs Specialist at the U.S. Mission to the EU where he coordinated the outreach to EU stakeholders on Economic Affairs, with a focus on digital, energy, trade and environmental issues. Quattrocchi earned an LL.M. in International Legal Studies at the Georgetown University Law Center, and a Master’s degree in International and European Law at LUISS Guido Carli, in Rome, Italy. Quattrocchi speaks English, Italian and French. He is based in BSA’s Brussels office.
