
The EU AI Act Passes Parliament — What It Means for Developers

·852 words·4 mins
Osmond van Hemert

Yesterday, the European Parliament voted to approve the AI Act with a decisive majority. It’s the first comprehensive regulatory framework for artificial intelligence anywhere in the world, and whether you’re building AI systems in Europe or not, this is going to shape how we develop and deploy AI for years to come.

Having watched regulation cycles in tech for three decades — from data protection directives to GDPR — I can tell you this vote matters more than most developers realize. The AI Act doesn’t just affect the big players. It reaches deep into the development stack.

The Risk-Based Framework

The Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal risk. Unacceptable risk systems — things like social scoring by governments or real-time biometric surveillance in public spaces — are banned outright. High-risk systems face the strictest requirements: think AI used in hiring, credit scoring, law enforcement, or critical infrastructure.
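
As a rough mental model (and only that; which tier a real system lands in is a legal determination, not a technical one), the categories map out something like this. The mappings below paraphrase the Act's own examples:

```python
# Illustrative only: the four risk tiers as a simple lookup.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical mappings paraphrasing the Act's examples.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric surveillance": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```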

What’s interesting from a developer’s perspective is the high-risk category. If you’re building an AI-powered recruitment screening tool, a medical diagnostic assistant, or an automated loan approval system, you’ll need to implement:

  • Risk management systems throughout the AI lifecycle
  • Data governance with strict requirements on training data quality
  • Technical documentation that would make most engineering teams sweat
  • Human oversight mechanisms built into the system design
  • Accuracy, robustness, and cybersecurity standards

The documentation requirements alone are substantial. You need to be able to explain how your training data was collected, what biases exist, and how you’ve mitigated them. For teams used to moving fast and iterating, this is a significant shift in how you approach the development lifecycle.
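
To make the documentation requirement concrete, here's a minimal sketch of what a machine-readable dataset record could look like. The schema and field names are my own invention; the Act mandates the information, not any particular format.

```python
# A hypothetical dataset record for the technical file of a high-risk
# system. Field names are illustrative, not prescribed by the Act.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    name: str
    source: str                   # where the data came from
    collection_method: str        # scraped, licensed, user-contributed, ...
    known_biases: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="resume-corpus-v3",
    source="internal ATS export, 2019-2022",
    collection_method="user-contributed, with consent at upload",
    known_biases=["over-represents applicants from large tech hubs"],
    mitigations=["reweighted sampling by region", "bias audit per release"],
)

# Serialize into the technical documentation an auditor would review.
print(json.dumps(asdict(record), indent=2))
```

It's not glamorous, but a record like this answers exactly the questions regulators will ask.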

General-Purpose AI Models Get Special Treatment

One of the more contentious additions during the parliamentary process was the treatment of general-purpose AI (GPAI) models — essentially foundation models like GPT-4, PaLM, and their open-source counterparts. The Parliament’s version requires GPAI providers to:

  • Disclose that content was generated by AI
  • Design models to prevent generation of illegal content
  • Publish summaries of copyrighted training data used

That last point is going to be particularly thorny. Transparency about training data is exactly what the major AI labs have been avoiding: OpenAI, Google, and others have grown increasingly opaque about what goes into their models. The EU is pushing back hard on that.

For developers building on top of these foundation models — through APIs, fine-tuning, or otherwise — there’s a cascade effect. If the foundation model provider needs to comply, the requirements flow downstream. You’ll need to understand what compliance obligations transfer to you as an application developer.
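
As a sketch of the simplest downstream obligation, here's what labelling output as AI-generated might look like at the application layer. `call_model` is a stand-in for whatever provider API you actually use, and the disclosure fields are assumptions, not anything the Act prescribes.

```python
# Hypothetical wrapper that attaches an AI-generated disclosure to
# model output before it reaches end users.
from datetime import datetime, timezone

def call_model(prompt: str) -> str:
    # Placeholder: swap in your real provider call (API, self-hosted, ...).
    return f"[stub completion for: {prompt}]"

def generate_with_disclosure(prompt: str) -> dict:
    """Wrap a completion with disclosure metadata a UI can surface."""
    return {
        "text": call_model(prompt),
        "ai_generated": True,                        # shown to end users
        "model_provider": "upstream-gpai-provider",  # illustrative value
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

print(generate_with_disclosure("Draft a polite rejection email"))
```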

The Open Source Question

There’s been significant debate about how the Act treats open-source AI. The current text includes some exemptions for open-source components, but the boundaries are fuzzy. If you release an open-source model that gets used in a high-risk application, where does your responsibility end?

The Parliament’s position seems to be that open-source developers aren’t automatically liable for downstream uses, but the trilogue negotiations with the Council and Commission will determine the final language. This is something the open-source AI community needs to watch very closely.

I’ve been involved in open-source projects for most of my career, and regulatory uncertainty is poison for community-driven development. Clear carve-outs for open-source research and development are essential, or we risk pushing AI innovation entirely behind corporate walls.

What Happens Next

The parliamentary vote is a major milestone, but it's not the finish line. The trilogue, the three-way negotiation between the Parliament, the Council of the EU, and the European Commission, starts now. This is where the real horse-trading happens. The Council's version of the Act is notably less strict in several areas, so expect some watering down.

Even after agreement, there's a transition period. Most obligations won't kick in for 18 to 24 months after the final text is adopted. But if you're planning AI-powered products or services, the time to start thinking about compliance architecture is now, not once that clock is already running.

My Take

I’ve been through enough regulatory cycles to know that the initial developer reaction is usually panic, followed by resignation, followed by “actually, this isn’t so bad.” GDPR followed exactly that pattern. The companies that took it seriously early gained a competitive advantage.

The AI Act will be similar. Yes, the compliance burden is real, especially for high-risk applications. But the frameworks it requires — risk assessment, documentation, human oversight, data governance — are things we should be doing anyway. Most AI failures I’ve seen in production trace back to exactly the gaps this regulation targets: poor training data quality, no human fallback, inadequate testing for bias.
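
For what it's worth, the human-fallback piece doesn't have to be elaborate. Here's a minimal sketch of the pattern, with an assumed confidence threshold and an in-memory queue standing in for real review infrastructure:

```python
# Route low-confidence automated decisions to a human instead of
# auto-actioning them. Threshold and queue are placeholders.
REVIEW_THRESHOLD = 0.85
review_queue: list[dict] = []

def decide(application: dict, score: float) -> str:
    if score >= REVIEW_THRESHOLD:
        return "auto-approved"
    review_queue.append({"application": application, "score": score})
    return "pending human review"

print(decide({"id": 42}, 0.91))  # auto-approved
print(decide({"id": 43}, 0.60))  # pending human review
```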

The developers and teams who build these practices into their workflows now will be ahead of the curve. The ones who wait for the final text and then scramble will be playing catch-up.

Whether you agree with every provision or not, comprehensive AI regulation was inevitable. The EU fired the first shot. Others will follow. Best to be ready.

This is part of my ongoing series on AI in Development, tracking how artificial intelligence is reshaping the software engineering landscape.
