The European Union’s landmark Artificial Intelligence (AI) Act just moved from paper to practice. As of Aug. 2, providers of general-purpose AI (GPAI) models face binding legal obligations, even though key guidance only landed weeks ago. The new guidance has significant implications because it describes:
- Which companies fall within the scope of GPAI;
- What they must disclose about their training data; and
- How companies are required to manage systemic risks.
Although the AI Act entered into force on Aug. 1, 2024, the industry only truly had since the second half of July of this year to prepare. That was when the three key GPAI guidance documents — the Code of Practice, the Guidelines, and the Training Data Summary Template — were published. The good news is that, with the new guidance, uncertainty is finally starting to lift.
The Guidelines — the most critical of the three — were needed to determine if a model is considered GPAI in the first place. Until that guidance came out, a provider would not know if it was even eligible to sign the Code of Practice or whether it needed to comply with related obligations as of Aug. 2.
Progress in the Final Package
The good news is that the final Code of Practice incorporates several important improvements that the Business Software Alliance and others have called for: stronger protections for trade secrets and proprietary information, a cleaner link to the AI Act’s scope, and an explicit nod to the EU’s copyright rules and its text and data mining exception. These changes matter for companies seeking to be transparent while protecting the integrity of their business models.
The Training Data Summary Template was also improved to be somewhat more practical than the earlier draft and now contains explicit provisions to protect confidential business information. It offers a framework for the “detailed summary” of the data used to train a model that, if applied proportionately, could enhance trust in how AI models are built.
Bottom Line: The Guidelines offer clarity to companies that were previously uncertain about their compliance obligations and how to fulfill them. They define GPAI in operational terms and provide a methodology for determining whether obligations apply. Even on a short timeline, this clarity is valuable.
Points That Need Careful Handling
Despite this progress, however, challenges remain. The Guidelines' reliance on a compute-based threshold to help determine whether a model qualifies as GPAI is a blunt and imperfect method. It risks capturing companies that adapt models for narrow, low-risk enterprise uses, far removed from the systemic risks the AI Act is meant to address.
The Code of Practice’s copyright and systemic risk mitigation chapters remain complex and needlessly burdensome for some providers. And the Training Data Summary Template’s disclosure requirements, while more balanced than before, still demand a level of detail that may be difficult to deliver in practice.
The Opportunity Ahead
Aug. 2 marks the beginning of Europe’s AI Act proving itself in practice. The next year will test how well ambition translates into workable solutions, how quickly rules can adapt to changing technology, and how effectively regulators and providers can collaborate.
If this phase is handled well, it will position Europe as a leader in the global AI ecosystem — one that can establish trust in the technology and deliver the innovation it promises.