
AI Refill: What’s on the EU’s Policy Radar


Nearly a year after the AI Act was adopted, Brussels remains focused on artificial intelligence (AI) policy. Multiple strategies, consultations, and guidelines are shaping how the European Union will regulate AI, with real implications for enterprise software companies worldwide.

The questions remain: What qualifies as a general-purpose AI (GPAI) model? Who’s responsible for what in the AI value chain? How can compliance work without slowing innovation?

Here are five developments, along with insight into how BSA has advocated for practical rules.

1. Apply AI Strategy: Lab to Real World

The European Commission’s Apply AI Strategy aims to move technologies from the lab into the real world and encourage AI adoption.

BSA’s response emphasizes two points:
• Stick to clarifying existing rules rather than revisiting settled questions; and
• Ensure digital sovereignty doesn’t exclude established international partners.

Companies are already investing in compliance and need stable guidance, not reinterpretation. Meanwhile, global tech firms have invested heavily in EU compliance, local infrastructure, and regional teams, and these efforts deserve support, not new barriers.

2. GPAI Guidelines: Big Models, Blurry Lines

Currently, two major GPAI deliverables remain pending:
• The training data summary template (due at the end of June); and
• The Code of Practice.

Meanwhile, the Commission ran a public consultation on draft guidelines that aim to clarify how the AI Act's obligations for GPAI models will be implemented. The draft raised serious questions, such as whether the amount of training compute should determine whether a model qualifies as "general-purpose." Advances in model design mean high-performing systems can use less compute, while smaller models trained with large compute budgets could be wrongly captured. It's like judging a car's danger by engine size alone: what matters is how a model is actually used, not how it is built.

BSA called for a practical approach based on real-world uses and risks. Downstream modifiers should only be responsible for their specific changes, not the full risk profile of the base models they adapt and deploy. To go back to the car example: If you change the tires on a car, you shouldn’t suddenly be held responsible for how the whole vehicle was built.

3. One Code of Practice to Rule Them All

Speaking of the Code of Practice, BSA joined 10 industry groups in calling for major improvements to the latest draft. While the Code should provide useful guidance, the latest version goes beyond the AI Act's legal requirements, introducing copyright rules that conflict with existing EU law and imposing excessive transparency demands.

Together with our co-signatories, we asked the Commission to align the Code with the AI Act itself and promote flexibility over rigid prescriptions. Without this shift, the Code risks creating more uncertainty rather than clarity.

4. International Digital Strategy: Welcome Signals

The Commission’s recently published International Digital Strategy for the European Union confirmed its commitment to open digital markets and international cooperation. For the enterprise software sector, openness isn’t a slogan; it’s how modern systems are built and delivered. Cross-border infrastructure, global talent, and international collaboration are part of how software works at scale.

The strategy’s emphasis on cooperation and collaboration with like-minded partners is welcome. The challenge now is ensuring consistent application across AI, cloud, and cybersecurity policies.

5. High-Risk AI Business

The European AI Office just launched a consultation on the rules for high-risk AI systems, focusing on Article 6 of the EU AI Act, its annexes, and value chain responsibilities. This directly impacts BSA members that provide enterprise AI for sensitive applications, from hiring to public services.

The consultation runs until July 18, with guidelines due by Feb. 2, 2026. BSA is preparing a detailed response on how obligations should work in complex value chains. It’s a process that will keep us busy this summer and one that will play a key role in shaping how the AI Act works in practice.

Never a Dull Moment

There is never a dull moment in EU tech policy, and AI is no exception. From how models are defined to where responsibilities fall, every guideline and consultation helps shape how the AI Act plays out in practice. BSA will remain engaged on all fronts and continue to share updates as the work progresses.

Author:

Hadrien Valembois is Director, Policy – EMEA at BSA | The Software Alliance in Brussels, Belgium. In this role, he works with BSA members to develop and advance policy positions on a range of key issues, with a focus on data and cloud policies. Before joining BSA, Valembois was a Policy Officer at the Europe Office of the International Trademark Association (INTA), where he advocated before EU institutions and Member States on issues pertaining to intellectual property, the fight against counterfeiting, the digital single market, cybersecurity, Brexit, AI, and blockchain. Previously, Valembois was a Senior Manager at the Brussels-based lobbying firm Europtimum. Valembois holds an LL.M. in International Legal Studies from Georgetown University Law Center in Washington, DC. He also holds a master's degree in law, a master's degree in international relations, and a certificate in philosophy from the Catholic University of Louvain in Belgium.
