ANALYSIS
The Timeline Has Shifted. Your Compliance Programme Should Not.
The EU AI Act just got more complicated. After months of pressure from businesses arguing they did not have enough time or guidance to prepare, the European Commission proposed a package of changes known as the Digital Omnibus that would push the main compliance deadline from August 2026 to as late as December 2027. European Parliament committees voted last week to back a version of those changes. Formal negotiations between Parliament and Council are due to start in April or May.
So can you relax? The honest answer is: a little, but not much, and not for the reasons you might think.
What the law actually says right now
Until the final Omnibus text is agreed, negotiated and formally enacted, August 2026 remains the legal deadline. The negotiations have not started yet. They could take months. There is no guarantee the final text will look like what either the Commission or the Parliament committees have proposed. If your organisation is counting on December 2027, you are counting on a law that does not exist yet.
The delay is not evenly distributed
Even if the Omnibus passes broadly as proposed, the breathing room is not the same for every organisation. AI systems already on the market get the most relief, through a grandfathering clause that exempts them until they undergo a significant redesign. Organisations planning to deploy high-risk AI systems for the first time may find they are building toward a new, shorter timeline that starts from the point they launch.
The Omnibus also does not touch the rules already in force. Prohibited AI practices have been enforceable since February 2025. Rules for General-Purpose AI models have applied since August 2025. Those are live and not changing.
The five gaps most organisations still have not closed
Whether the deadline is August 2026 or December 2027, the same work needs to happen. Based on research from Deloitte, Gartner and independent compliance assessments, these are the five areas where most enterprises are furthest behind.
An AI inventory. Most organisations do not have a complete list of the AI tools they are using, including the ones embedded in software they already pay for. You cannot classify what you have not found.
Risk classification. The EU AI Act sorts AI systems into risk tiers: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency requirements only) and minimal risk (largely unregulated). Knowing which tier each of your AI tools falls into is the foundation of every other compliance step.
Technical documentation. High-risk AI systems require documented records of design decisions, training data, testing results and performance metrics. This is the most consistently underestimated compliance burden. Teams building without documentation will struggle to reconstruct it retrospectively.
Human oversight design. For AI systems making decisions that affect people, the Act requires documented mechanisms for a human to intervene, review and override the system. Many organisations have deployed AI decision-making without designing these checkpoints in.
Data governance for training data. Demonstrating that training data is representative, sufficiently high quality and free from bias is a formal requirement for high-risk systems. Most organisations have not begun this audit.
Why the delay might increase risk for unprepared organisations
Regulators do not enforce proportionally across all organisations simultaneously. They make examples. When enforcement eventually arrives, the organisations furthest from compliance attract attention first. The Omnibus extending the deadline does not reduce the penalty for non-compliance when enforcement begins. The organisations that use the delay as an opportunity to build properly will be in a fundamentally different position from those that treat it as permission to wait.
The competitive case for acting now
Governance is increasingly a revenue question, not just a legal one. Enterprise procurement teams are beginning to require evidence of AI governance maturity before awarding contracts. Institutional investors are factoring it into valuations. Insurance underwriters are asking for documented AI risk frameworks as a condition of coverage. The organisations building these capabilities now are not just avoiding fines. They are qualifying for opportunities their unprepared competitors cannot access. The August deadline was a forcing function. The Omnibus may move it. It will not change what needs to be built.