
EU AI Act GPAI Rules — Six Months In, and the Compliance Clock Is Ticking

Osmond van Hemert

AI Industry & Regulation - This article is part of a series.

Six months ago, in August 2025, the EU AI Act’s provisions for general-purpose AI (GPAI) models officially came into force. We’re now past the initial grace period and squarely into the era where non-compliance has real consequences. If you’re building anything that touches foundation models in Europe — or for European customers — this matters to you.

I’ve spent the last few weeks reviewing how the landscape has shifted since those provisions kicked in, and the picture is more nuanced than either the doomsayers or the optimists predicted.

What the GPAI Rules Actually Require

Let’s start with the basics, because I still encounter teams that haven’t fully internalized what’s required. The Act distinguishes between standard GPAI models and those classified as presenting “systemic risk” — essentially models trained with compute exceeding 10^25 FLOPs, though the Commission can update these thresholds.
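
To make that threshold concrete, here is a rough sketch using the common 6 × parameters × training-tokens approximation for dense-transformer training compute. The model size and token count below are hypothetical, and the approximation itself is only a heuristic.

```python
# Rough check against the Act's 10^25 FLOP systemic-risk threshold,
# using the common 6 * parameters * tokens approximation for the
# training compute of a dense transformer. Numbers are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumptively systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the systemic-risk threshold")
```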

For all GPAI providers, the requirements include maintaining up-to-date technical documentation, providing clear information to downstream deployers, implementing a copyright compliance policy, and publishing a sufficiently detailed summary of the content used for training. That last one has been the sticking point for most organizations I’ve spoken with.

For systemic-risk models, the bar is considerably higher: adversarial testing, incident monitoring and reporting to the AI Office, cybersecurity protections, and energy consumption reporting. The major labs — OpenAI, Google DeepMind, Anthropic, Mistral — have all published their compliance frameworks, though the depth and transparency vary significantly.
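
To make one of those obligations tangible, here is a sketch of what an internal serious-incident record might capture before anything goes to the AI Office. The fields are my own guess at a reasonable minimum, not a prescribed reporting format.

```python
from dataclasses import dataclass

@dataclass
class SeriousIncidentReport:
    """Internal incident record; fields are illustrative only."""
    model_id: str
    detected_at: str                 # ISO 8601 timestamp
    description: str                 # what happened, observed impact
    severity: str                    # internal triage level
    affected_deployments: list[str]
    mitigations_taken: list[str]
    reported_to_ai_office: bool = False

incident = SeriousIncidentReport(
    model_id="example-model-v1",     # hypothetical model
    detected_at="2026-01-14T09:30:00Z",
    description="Prompt-injection bypass of safety filters in production",
    severity="high",
    affected_deployments=["eu-west-api"],
    mitigations_taken=["Patched input filter", "Rotated system prompt"],
)
```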

The Documentation Burden Is Real

In my experience working with teams integrating GPAI models into production systems, the documentation requirements have been the most disruptive change. Not because they’re unreasonable — they’re actually quite sensible from a governance perspective — but because most organizations simply weren’t set up to produce this level of documentation about their AI pipelines.

The technical documentation requirement under Annex XI is comprehensive. You need to describe the model architecture, training methodology, data sources and preprocessing, evaluation results, computational resources used, and known limitations. For teams that have been iterating quickly on model fine-tuning and RAG pipelines, this means retroactively documenting decisions that were made informally.
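
As a rough illustration, here is a minimal structured record covering those Annex XI categories. The field names are my own simplification, not an official schema.

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Simplified record of the Annex XI documentation categories."""
    model_name: str
    architecture: str                      # e.g. decoder-only transformer
    training_methodology: str              # objectives, key design choices
    data_sources: list[str]                # provenance, preprocessing notes
    evaluation_results: dict[str, float]   # benchmark -> score
    compute_used_flops: float              # estimated training compute
    known_limitations: list[str]
    last_updated: str                      # keep in sync with each release

doc = GPAIModelDocumentation(
    model_name="example-model-v1",         # hypothetical model
    architecture="Decoder-only transformer, 7B parameters",
    training_methodology="Next-token prediction, then instruction tuning",
    data_sources=["Filtered web crawl", "Licensed text corpora"],
    evaluation_results={"MMLU": 0.62},
    compute_used_flops=8.4e23,
    known_limitations=["Hallucinates on niche factual queries"],
    last_updated="2026-02-01",
)
```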

I’ve seen a few approaches emerge. Some larger organizations have hired dedicated AI governance teams. Others have integrated documentation requirements into their CI/CD pipelines — essentially treating model cards and data sheets as build artifacts that must pass review before deployment. The latter approach resonates more with my engineering sensibilities: if compliance is part of the pipeline, it doesn’t become an afterthought.
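
A minimal version of that gate might look like the following: a script run as a CI step that fails the build when the model card is missing required sections. The file name and section list are illustrative, not any standard.

```python
import sys
from pathlib import Path

# Sections the gate requires; illustrative, not a standard.
REQUIRED_SECTIONS = [
    "## Architecture",
    "## Training Data",
    "## Evaluation",
    "## Known Limitations",
]

def check_model_card(path: str = "MODEL_CARD.md") -> int:
    """Return 0 if the model card exists and has all required sections."""
    card = Path(path)
    if not card.exists():
        print(f"FAIL: {path} not found")
        return 1
    text = card.read_text()
    missing = [s for s in REQUIRED_SECTIONS if s not in text]
    if missing:
        print(f"FAIL: missing sections: {', '.join(missing)}")
        return 1
    print("OK: model card passes the documentation gate")
    return 0

if __name__ == "__main__":
    sys.exit(check_model_card())
```

Wired into CI, the non-zero exit code blocks deployment, which is exactly the “compliance as part of the pipeline” behavior I mean: the documentation can’t drift out of date without the build telling you.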

Open Source Gets a (Partial) Pass

One aspect that deserves attention is how the Act treats open-source GPAI models. There’s a partial exemption: models released under free and open-source licenses, with their weights and architecture publicly available, are exempt from some of the documentation and transparency requirements, unless they present systemic risk. This was a hard-fought compromise during the legislative process, and it’s proving to be a meaningful differentiator.

Projects like Mistral’s open-weight releases and Meta’s Llama family have benefited from this carve-out. But there’s an important subtlety: the exemption applies to the model provider, not to deployers. If you take an open-source model and deploy it in a high-risk application in the EU, you inherit the full compliance burden for your specific use case.

This has created an interesting dynamic. I’m seeing more organizations choosing open-source base models not just for cost or flexibility reasons, but specifically because the compliance pathway is clearer and more manageable when you have full visibility into the model’s architecture and training process.

The Codes of Practice Are Still Taking Shape

The European AI Office has been working with industry stakeholders to develop codes of practice for GPAI providers. These codes are supposed to provide detailed guidance on how to meet the Act’s requirements — think of them as the practical “how” behind the legal “what.”

As of now, the drafts I’ve reviewed cover transparency, copyright compliance, and safety and security, but they’re still being refined. The challenge is striking the right balance between specificity (which providers need for actual implementation) and flexibility (which prevents the codes from becoming obsolete as the technology evolves).

For smaller companies and startups, the codes of practice may actually be a lifeline. Without them, the Act’s requirements are high-level enough that interpretation becomes expensive — you either need specialized legal counsel or you over-engineer your compliance approach, wasting resources either way.

My Take: Pragmatic Progress with Growing Pains

I’ll be honest: six months ago, I was skeptical about whether these regulations would achieve anything beyond creating paperwork. But I’ve come around somewhat. The documentation requirements, while burdensome, have forced organizations to be more intentional about their AI development practices. Teams that used to treat model selection and deployment as purely technical decisions are now considering governance implications from the start.

The energy consumption reporting requirement for systemic-risk models is also quietly significant. We’re starting to get real data about the environmental cost of large-scale AI training, and that transparency will only become more important as these models grow.
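
To give a sense of the arithmetic behind such a report, here is a back-of-envelope sketch converting GPU-hours into facility energy. The per-GPU power draw, datacenter overhead (PUE), and cluster size are all assumptions for illustration; real reports would come from metered data.

```python
def training_energy_mwh(gpu_count: int,
                        hours: float,
                        avg_power_w: float = 700.0,  # assumed per-GPU draw
                        pue: float = 1.2) -> float:  # assumed facility overhead
    """Estimate total facility energy for a training run in MWh."""
    gpu_energy_wh = gpu_count * hours * avg_power_w
    return gpu_energy_wh * pue / 1e6

# Hypothetical run: 8,192 GPUs for 30 days comes out near 5,000 MWh.
print(f"{training_energy_mwh(8192, 30 * 24):,.0f} MWh")
```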

That said, enforcement remains the open question. The AI Office has limited resources, and the interaction between EU-level oversight and national authorities is still being worked out. The next six months — leading up to the full enforcement of the high-risk AI system requirements in August — will tell us a lot about whether this framework has teeth.

For now, my advice to any team building with AI in or for Europe: don’t wait for perfect clarity. Start documenting your models, data pipelines, and deployment decisions today. The organizations that treat this as an opportunity to improve their engineering practices — rather than a regulatory burden to minimize — will be in the strongest position regardless of how enforcement develops.

This is part of a broader pattern I’ve been tracking in this series: AI development is maturing from a “move fast and break things” discipline into something that looks more like traditional software engineering, with governance, documentation, and accountability baked into the process. Whether that’s a good thing depends on your perspective, but it’s clearly the direction we’re heading.
