EU AI Act Takes Effect — What Developers Need to Know Right Now

Osmond van Hemert
This article is part of the AI Industry & Regulation series.

On February 2nd, the first batch of provisions from the EU AI Act became enforceable. After years of legislative wrangling, the world’s most comprehensive AI regulation is no longer theoretical — it’s law. And if you’re a developer building anything that touches AI, it’s time to pay attention, regardless of where you’re based.

I’ve been following this legislation since its early drafts in 2021, and what strikes me now is how quickly we’ve moved from “this will never pass” to “this is enforceable and there are fines.” The gap between regulatory intent and developer awareness remains uncomfortably wide.

What Just Kicked In

The EU AI Act uses a phased enforcement approach. The provisions that became applicable on February 2nd, 2025, focus on the most critical areas:

Prohibited AI practices are now banned. This includes social scoring systems (the final text covers both public and private actors), real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement), and AI systems that exploit the vulnerabilities of specific groups. It also bans emotion recognition in workplaces and educational institutions (with exceptions for medical and safety purposes) and the creation of facial recognition databases through untargeted scraping of the internet or CCTV footage.

AI literacy requirements are now in force. Organizations deploying AI systems must ensure that their staff have sufficient understanding of AI to operate these systems responsibly. This is vaguer than the technical provisions but signals a clear expectation that “we didn’t understand what the AI was doing” won’t fly as an excuse.

These are the “don’t be evil” provisions — practices that most responsible developers wouldn’t engage in anyway. The more complex requirements around high-risk AI systems, general-purpose AI models, and transparency obligations phase in over the coming months and into 2026.

The Classification System Matters

The Act categorizes AI systems into risk tiers: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). Understanding where your system falls is the first practical step.

High-risk categories include AI used in critical infrastructure, education, employment, essential services, law enforcement, and immigration. If your AI system influences decisions in these domains, you’re looking at requirements around risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity.

For most of us building developer tools, content generation systems, or business automation, we’re likely in the limited or minimal risk categories. But the boundaries aren’t always obvious. A chatbot that provides general information? Minimal risk. A chatbot that provides medical or legal advice? Potentially high-risk. The classification depends on the use case, not the technology.
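
To make that first triage pass concrete, here is a minimal sketch of how I think about it. To be clear: the category sets below are my own crude simplification of the Act’s tiers and the Annex III domains, the function is hypothetical, and none of this is legal advice. It only illustrates the key point that classification keys off the use case, not the technology.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable (banned)"
    HIGH = "high-risk (heavily regulated)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (largely unregulated)"

# Simplified stand-ins for the Act's categories; the real Annex III
# definitions are far more detailed and context-dependent.
PROHIBITED_USES = {"social_scoring", "untargeted_face_scraping",
                   "workplace_emotion_recognition"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "education", "employment",
                     "essential_services", "law_enforcement", "immigration"}

def classify(use_case: str, interacts_with_humans: bool = False) -> RiskTier:
    """Rough first-pass triage by use case, not by technology."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    # Chatbots and other user-facing generative systems pick up
    # transparency obligations even when otherwise low-risk.
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment"))                            # RiskTier.HIGH
print(classify("devtools", interacts_with_humans=True))  # RiskTier.LIMITED
```

A real assessment would also weigh context a lookup table can’t capture, such as whether the system’s output merely informs a human or effectively decides the outcome.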

General-Purpose AI Model Obligations

This is where it gets particularly relevant for the current AI landscape. The Act includes specific provisions for general-purpose AI models (think GPT-4, Claude, Gemini, Llama) that apply to the model providers themselves. These include:

  • Maintaining technical documentation
  • Providing information to downstream deployers
  • Complying with EU copyright law
  • Publishing sufficiently detailed summaries of training data

Models deemed to pose “systemic risk” — currently presumed when training compute exceeds 10^25 FLOPs — face additional requirements including model evaluation, adversarial testing, incident reporting, and cybersecurity measures.
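
To get a feel for where that threshold sits, a common rule of thumb from the scaling-laws literature estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The Act doesn’t prescribe this formula, and the model sizes below are invented for illustration, but the arithmetic shows why only the largest frontier training runs cross the line:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute for a dense transformer:
    roughly 6 FLOPs per parameter per training token."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # the Act's current compute threshold

# Hypothetical model sizes and token counts, for illustration only.
for name, params, tokens in [("7B on 2T tokens", 7e9, 2e12),
                             ("70B on 15T tokens", 70e9, 15e12),
                             ("400B on 15T tokens", 400e9, 15e12)]:
    flops = training_flops(params, tokens)
    flag = "above" if flops >= SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 1e25 threshold)")
```

By this estimate, a 7B or even 70B model trained on today’s typical token budgets lands below the threshold, while a 400B-parameter run on 15T tokens lands above it at roughly 3.6 × 10^25 FLOPs.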

For developers using these models through APIs, the practical impact is indirect but real. Expect model providers to update their terms of service, potentially restrict certain use cases in the EU, and provide more detailed documentation about model capabilities and limitations. Some of this is already happening — OpenAI and Google have both been updating their compliance frameworks.

What This Means Outside Europe

If you’re thinking “I’m not in the EU, this doesn’t apply to me” — not so fast. The Act applies to any AI system that is placed on the market in the EU or whose output is used in the EU. If you’re building a SaaS product with AI features and you have European customers, you’re in scope.

This is the “Brussels Effect” that we’ve seen with GDPR. European regulation tends to set a de facto global standard because it’s often easier for companies to build one compliant product than to maintain separate versions for different markets. I expect a similar dynamic to play out with AI regulation.

The US is taking a very different approach — the recent executive orders have focused more on promoting AI development than restricting it, and the Stargate infrastructure announcement from a couple of weeks ago underscores the current administration’s priority on AI acceleration. This regulatory divergence creates complexity for anyone operating across both markets.

Practical Steps for Developer Teams

Based on my reading of the Act and conversations with colleagues navigating compliance, here’s what I’d recommend for development teams right now:

  1. Audit your AI usage: Map out where you’re using AI systems and how they influence decisions. You might be surprised how many AI touchpoints exist across your product (see the register sketch after this list).

  2. Classify your risk level: Use the Act’s framework to understand which tier your applications fall into. The EU AI Act Compliance Checker is a useful starting point.

  3. Document everything: Technical documentation requirements are coming for high-risk systems. Start building the habit now — document your training data, model choices, evaluation metrics, and known limitations.

  4. Review your supply chain: If you’re using third-party AI models or services, understand your obligations as a “deployer” versus a “provider” under the Act. The responsibility allocation isn’t always intuitive.

  5. Invest in AI literacy: The literacy requirement applies now. Make sure your team understands not just how to use AI tools, but their limitations, biases, and appropriate use cases.
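
For steps 1 through 3, the lightweight version is an AI-system register that lives in version control next to your code. Here is a minimal sketch; the record fields and the example entry are my own invention rather than any format the Act mandates:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system register.
    Field names are illustrative, not prescribed by the Act."""
    name: str
    purpose: str
    model: str                  # e.g. a third-party API or in-house model
    role: str                   # "provider" or "deployer" under the Act
    risk_tier: str              # from your own classification pass
    influences_decisions: bool
    known_limitations: list[str] = field(default_factory=list)
    evaluation_notes: str = ""

register = [
    AISystemRecord(
        name="support-triage-bot",
        purpose="Routes inbound support tickets to queues",
        model="third-party LLM API",
        role="deployer",
        risk_tier="limited",
        influences_decisions=True,
        known_limitations=["struggles with mixed-language tickets"],
    ),
]

# Serialize the register so it can be reviewed and diffed like code.
print(json.dumps([asdict(r) for r in register], indent=2))
```

Even this toy structure forces the questions that matter for compliance: what the system does, whose model it is, which role you play, and what you already know it gets wrong.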

My Take

I have mixed feelings about the EU AI Act. On one hand, some form of regulation is clearly needed — the pace of AI deployment has outstripped our ability to understand its societal impact, and self-regulation hasn’t been sufficient. The Act’s risk-based approach is sensible, and the focus on prohibited practices targets genuinely harmful applications.

On the other hand, I worry about the compliance burden on smaller companies and open-source projects. The Act includes some exemptions for research and open-source, but the boundaries are unclear. And the pace of AI development is so fast that regulations drafted in 2022-2023 are already struggling to keep up with the reality of 2025.

What I hope we don’t see is a repeat of the early GDPR days, where fear and uncertainty led to over-compliance and the blocking of useful services for European users. The AI Act is more nuanced than many people realize, and the risk-based approach means that most AI applications face relatively light requirements. But nuance tends to get lost in corporate compliance departments.

For now, the most important thing is awareness. Read the Act, understand where your systems fit, and start building compliance into your development process. The full enforcement timeline stretches to 2027, but the direction of travel is clear — and the earlier you start, the less painful the transition will be.
