
The EU AI Act Compliance Clock Is Ticking — What Developers Need to Know

Osmond van Hemert

The EU AI Act’s provisions on prohibited AI practices took effect earlier this year, and the next wave of requirements — covering high-risk AI systems — is now firmly on the horizon. If you’re a developer building AI-powered applications that serve European users, the compliance clock is ticking, and it’s time to think about what this means for your code, your architecture, and your development processes.

Understanding the Risk Categories

The AI Act categorizes AI systems into risk tiers, and this categorization has direct implications for how you build and deploy software. At the top are prohibited practices: social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative AI techniques. Most developers won’t encounter these, but it’s worth understanding where the boundaries lie.

The category that will affect the most development teams is high-risk AI systems. This includes AI used in employment decisions, credit scoring, education assessment, critical infrastructure management, and law enforcement. If your AI system influences decisions in any of these domains, you’re subject to substantial requirements around documentation, testing, human oversight, and transparency.

What catches many teams off guard is how broadly “high-risk” can be interpreted. A resume screening tool? High-risk. An AI-powered tutoring system that affects student assessments? Potentially high-risk. A predictive maintenance system that acts as a safety component for energy infrastructure? Likely high-risk. The scope is wider than many developers initially assume.

Practical Technical Requirements

Let me break down what the high-risk requirements mean in practice for a development team:

Technical documentation: You need comprehensive documentation of your training data, model architecture, training process, and evaluation metrics. This isn’t just a README — the Act expects documentation sufficient for a regulator to understand how your system works and why it makes the decisions it does. If you’re using fine-tuned foundation models, you need to document both the foundation model’s characteristics and your fine-tuning process.
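
One lightweight way to approach this is to keep the required documentation as structured metadata that is versioned alongside the model itself, rather than as a loose README. A minimal sketch, with entirely hypothetical field names and values:

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Structured documentation that travels with each model version.
    Fields mirror what a regulator would ask for; names are illustrative."""
    model_name: str
    version: str
    foundation_model: str              # base model, if fine-tuned
    training_data_sources: list[str]
    training_procedure: str
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)

doc = ModelDocumentation(
    model_name="resume-screener",                 # hypothetical system
    version="2.3.0",
    foundation_model="example-base-llm",          # hypothetical base model
    training_data_sources=["s3://datasets/resumes-2024"],
    training_procedure="LoRA fine-tune, 3 epochs, lr=2e-4",
    evaluation_metrics={"f1": 0.91, "demographic_parity_gap": 0.03},
    known_limitations=["not validated for non-EU resume formats"],
)
```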

Data governance: Training data must be “relevant, sufficiently representative, and to the best extent possible, free of errors and complete.” In practice, this means implementing data lineage tracking, bias auditing, and quality assurance processes for your training pipelines. If you’re using synthetic data, you need to document how it was generated and verify it doesn’t introduce systematic biases.
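
A minimal sketch of such a quality gate, assuming pandas and a tabular training set with a demographic column; the 5% threshold is a placeholder, not a figure from the Act:

```python
import pandas as pd

def check_training_data(df: pd.DataFrame, group_col: str, min_share: float = 0.05):
    """Illustrative data-governance gate: fail the pipeline on basic
    completeness and representativeness problems before training starts."""
    issues = []
    # Completeness: no missing values in any column.
    null_counts = df.isnull().sum()
    if null_counts.any():
        issues.append(f"missing values: {null_counts[null_counts > 0].to_dict()}")
    # Representativeness (a simple proxy): every demographic group
    # must make up at least `min_share` of the training set.
    shares = df[group_col].value_counts(normalize=True)
    underrepresented = shares[shares < min_share]
    if not underrepresented.empty:
        issues.append(f"underrepresented groups: {underrepresented.to_dict()}")
    if issues:
        raise ValueError("; ".join(issues))
```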

Logging and monitoring: High-risk systems must maintain logs of their operation, with enough detail to enable post-hoc analysis of decisions. This has real architectural implications — you need to design your inference pipeline to capture inputs, outputs, confidence scores, and any intermediate reasoning steps in a way that’s auditable and tamper-resistant.
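
One way to get tamper-evidence without special infrastructure is hash chaining: each log record includes the hash of the previous one, so any after-the-fact edit breaks the chain. A minimal sketch; a real deployment would also write records to append-only or WORM storage:

```python
import hashlib, json, time

class AuditLog:
    """Sketch of an append-only inference log with a hash chain."""
    def __init__(self):
        self._prev_hash = "0" * 64  # genesis value

    def record(self, model_version: str, inputs: dict, output, confidence: float) -> dict:
        entry = {
            "ts": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "confidence": confidence,
            "prev_hash": self._prev_hash,  # links this record to the last one
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True, default=str).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        return entry  # in practice: persist to append-only storage
```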

Human oversight: There must be mechanisms for human oversight, including the ability to override or halt the AI system. This means building admin interfaces, kill switches, and escalation pathways into your application architecture from the start, not bolting them on later.
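
A toy sketch of the kill-switch part, with the flag read from an environment variable purely to keep the example small; in production you would read a feature-flag service or config store, and the helpers here are hypothetical:

```python
import os

def ai_enabled() -> bool:
    """Kill switch: operators flip this flag to halt automated decisions."""
    return os.environ.get("AI_DECISIONS_ENABLED", "true") == "true"

def route_to_human_review(case: dict) -> str:
    # Escalation path: park the case for a human reviewer instead of deciding.
    print(f"queued for human review: {case['id']}")
    return "pending_review"

def decide(case: dict) -> str:
    if not ai_enabled():
        return route_to_human_review(case)
    # ... automated decision path, still subject to reviewer override ...
    return "auto_decision"

print(decide({"id": 42}))  # set AI_DECISIONS_ENABLED=false to halt the system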

What This Means for Your Architecture

If I were designing a high-risk AI system today with EU AI Act compliance in mind, my architecture would look different from what most teams build. Here’s what I’d prioritize:

Observability-first design: Every inference call gets logged with full context. I’d use something like OpenTelemetry to instrument the entire pipeline, from data ingestion through preprocessing, inference, and post-processing. These logs need to be immutable and retained for the periods specified in the Act.
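
A sketch of what that instrumentation might look like with the OpenTelemetry Python API, assuming a TracerProvider and exporter are configured at startup; `model` here stands for any object with a `predict` method, and the preprocessing step is simplified to a dictionary lookup:

```python
from opentelemetry import trace

tracer = trace.get_tracer("inference-pipeline")

def run_inference(request: dict, model, model_version: str) -> dict:
    # One span per stage yields a full, queryable trace of every decision.
    with tracer.start_as_current_span("inference") as span:
        span.set_attribute("model.version", model_version)
        with tracer.start_as_current_span("preprocess"):
            features = request["features"]   # stand-in for real preprocessing
        with tracer.start_as_current_span("predict"):
            result = model.predict(features)
        span.set_attribute("prediction.confidence", float(result["confidence"]))
        return result
```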

Model versioning and reproducibility: Every model version must be traceable back to its training data and configuration. Tools like MLflow or DVC aren’t optional luxuries — they’re compliance necessities. You need to be able to recreate any model version that was ever in production.
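
For the MLflow route, a sketch of the kind of run logging that ties a model version to its data and configuration; every path and value below is a placeholder:

```python
import mlflow

with mlflow.start_run(run_name="resume-screener-v2.3.0"):
    # Data lineage: record exactly which dataset snapshot trained this version.
    mlflow.log_param("training_data", "s3://datasets/resumes-2024")
    mlflow.log_param("base_model", "example-base-llm")
    mlflow.log_param("learning_rate", 2e-4)
    mlflow.log_metric("f1", 0.91)
    # The exact config file used for this run, stored as a run artifact.
    mlflow.log_artifact("configs/train.yaml")
```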

Bias testing as CI/CD: Fairness and bias evaluations should run as part of your continuous integration pipeline, not as quarterly manual reviews. Define your fairness metrics, write automated tests, and fail the build if they regress. This is the same principle we apply to performance testing — make it automated and continuous.
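
A self-contained sketch of such a CI gate: a demographic parity check written as an ordinary test, with a hard-coded toy evaluation set standing in for the real held-out data and a placeholder threshold:

```python
def demographic_parity_difference(preds, groups) -> float:
    """Max gap in positive-prediction rate across groups."""
    rates = {}
    for p, g in zip(preds, groups):
        n, pos = rates.get(g, (0, 0))
        rates[g] = (n + 1, pos + (1 if p == 1 else 0))
    shares = [pos / n for n, pos in rates.values()]
    return max(shares) - min(shares)

def test_fairness_regression():
    # In CI this would load the held-out evaluation set and candidate model;
    # hard-coded values here just illustrate the gate.
    preds  = [1, 1, 0, 0, 1, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    assert demographic_parity_difference(preds, groups) <= 0.10, \
        "fairness regression: demographic parity gap exceeds threshold"
```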

Circuit breakers and human-in-the-loop: Design your system with the assumption that a human will sometimes need to intervene. That means building queuing systems for uncertain predictions, alerting for anomalous patterns, and administrative interfaces for review and override.
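
A minimal sketch of the routing logic, with an in-memory list standing in for a real review queue and a placeholder confidence threshold that you would calibrate on validation data:

```python
CONFIDENCE_THRESHOLD = 0.85  # placeholder; calibrate in practice

def route_prediction(prediction: str, confidence: float, review_queue: list) -> str:
    """Circuit-breaker sketch: low-confidence predictions are never returned
    directly; they are parked in a queue for human review and override."""
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append({"prediction": prediction, "confidence": confidence})
        return "pending_human_review"
    return prediction

queue: list = []
print(route_prediction("approve", 0.97, queue))  # -> "approve"
print(route_prediction("reject", 0.62, queue))   # -> "pending_human_review"
```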

The General-Purpose AI Angle

For teams using foundation models through APIs — calling GPT, Claude, Gemini, or similar services — the Act places specific obligations on the providers of these “general-purpose AI models.” Providers must supply technical documentation, comply with EU copyright law, and publish sufficiently detailed summaries of the content used for training.

But that doesn’t absolve you as the deployer. If you build a high-risk application on top of a general-purpose model, you’re still responsible for the application-level compliance requirements. The foundation model provider handles their obligations; you handle yours. Understanding this boundary is crucial for your compliance strategy.

My Take

I’ll be honest: regulation often makes me nervous as a developer. Bureaucratic requirements can slow down innovation and create compliance theater that doesn’t actually improve outcomes. But having studied the AI Act in detail, I think the technical requirements are largely reasonable. Documentation, testing, monitoring, and human oversight aren’t bureaucratic overhead — they’re good engineering practices that too many AI teams skip in the rush to ship.

The teams that will struggle most are those that have been treating AI development like a prototyping exercise: minimal documentation, no version control for data or models, no systematic bias testing. The AI Act is essentially mandating the engineering rigor that should have been there all along.

My advice: don’t wait for enforcement actions to start taking this seriously. Begin by auditing your existing AI systems against the risk categories. If anything falls into high-risk territory, start implementing the technical controls now. The compliance deadline will arrive faster than you think, and retrofitting compliance into an existing system is always more expensive than building it in from the start.

This is one of those areas where the developer landscape is shifting beneath our feet, and proactive preparation beats reactive scrambling every time.
