As the European Union begins to implement its landmark Artificial Intelligence (AI) Act, it has initiated a process to develop a tailored, voluntary Code of Practice (CoP) for General-Purpose Artificial Intelligence (GPAI) models that can be adapted for a wide variety of uses.
Under the EU AI Act, GPAI models are regulated according to a risk-based approach: the higher the risk, the stricter the obligations. Models that pose "systemic risks" — those that may have a broad and significant impact — will be subject to specific rules and oversight. Developers of such models will be required to notify the European Commission, implement risk mitigation strategies, and ensure their systems comply with cybersecurity standards. Other GPAI models will face lighter obligations focused on transparency and technical documentation.
To kick off the drafting of the CoP, the EU held a structured consultation from Aug. 30 to Sept. 18, gathering feedback from stakeholders. The resulting input will shape the initial draft of the CoP, which will focus on three core areas:
- Transparency and copyright-related provisions for GPAI models, including a detailed summary of the copyrighted works used in training these models;
- A taxonomy of risks, along with assessment and mitigation strategies for GPAI models considered to pose systemic risks; and
- Approaches to effectively monitor and review the upcoming CoP.
The European Commission is set to draft the CoP and aims for it to enter into force in August 2025. BSA | The Software Alliance welcomes this collaborative approach of involving a wide range of stakeholders — from academia and member states to industry and civil society — in the drafting of the Code. The process is structured around four working groups: Transparency and Copyright, Categorization of Risk and Assessment, Identification of Mitigation Measures, and Governance and Internal Risk Assessments.
BSA — representing the global software industry — is playing an active role in this consultation. In our feedback, we highlighted two key points:
- Systemic Risk Obligations: We believe that any obligations related to systemic risks must not expand the scope of the AI Act, either in defining what constitutes a systemic risk or in the monitoring obligations the Act provides.
- Respect for Copyright: We strongly advocate for full respect of the text and data mining exceptions in the 2019 EU Copyright Directive (Articles 3 and 4). Developers should be able to use publicly available data for training GPAI models without facing overly restrictive regulations or new requirements to disclose proprietary information about how a model is trained.
As the regulatory framework takes shape, BSA — a participant in the Copyright Working Group — will continue to advocate for balanced policies that promote responsible innovation, and will keep engaging with policymakers to ensure that the rules governing GPAI are both effective and adaptable to the fast-evolving nature of AI technology.