Zymbos Intelligence: Issue 004
ZYMBOS AI  ·  LONDON  ·  MARCH 2026
AI insight for professionals who act on what they know
25 Mar 2026
By John McGann  ·  London  ·  8 min read

Welcome to Zymbos Intelligence. Each issue cuts through the noise with curated AI intelligence, a deep-dive analysis, a practical tool review, and a ready-to-deploy prompt, plus an unfiltered take on what it all means for professionals and entrepreneurs building their knowledge in the age of AI.

01 · Intelligence Briefing

Seven stories. Straight facts. McGann's Take on each.

UK POLICY

UK Kills Its Own AI Copyright Proposal: Courts Take Over

On 18 March, the UK government published its long-awaited Copyright and AI Report. The headline: the government formally abandoned its own preferred policy, an opt-out model that would have let AI developers use copyrighted material unless rights-holders explicitly said no. The proposal attracted more than 11,500 consultation responses, the overwhelming majority against it.

What replaces it is, in essence, nothing. No new legislation. No new regulator. No mandatory transparency obligations for AI developers. The government says it will monitor the market and let commercial licensing deals develop naturally. Rights-holders keep existing legal protections, but enforcing them against AI models trained overseas remains extremely difficult without transparency tools the government just declined to introduce.

The timing was striking. The day before the report was published, Chancellor Reeves pledged a record £2.5 billion AI investment and declared the UK would achieve the fastest AI adoption in the G7.

Read more →

McGann's Take

The UK pledged to be the G7's fastest AI adopter and declined to resolve the copyright question in the same week. That tension will not hold. If you are building AI products that train on UK content, document your datasets now. The courts will eventually provide what Parliament would not.

EU REGULATION

EU AI Act: Is the Digital Omnibus a Simplification or a Rollback?

The EU's Digital Omnibus, a legislative package proposed in November 2025 to simplify parts of the EU AI Act (the world's first comprehensive law regulating artificial intelligence), has moved quickly this month. On 13 March the Council of the EU agreed its negotiating position. On 18 March, the European Parliament's two lead committees voted 101 in favour, 9 against, to adopt amendments replacing a vague "when standards are ready" compliance trigger with fixed calendar dates.

High-risk AI obligations, covering AI systems that affect people's jobs, finances, health or safety, are now proposed to apply from December 2027 at the latest, roughly 16 months beyond the original August 2026 deadline. The European Parliament plenary vote is expected imminently, with formal negotiations due to begin in April or May.

The framing is contested. The European Commission presents the changes as proportionate relief for businesses that lacked adequate guidance. Critics, including a joint opinion from the EDPB and EDPS (the EU's independent data privacy watchdogs), have called it a significant weakening of digital protections. Some have gone further, describing it as a massive rollback.

Read more →

McGann's Take

Whether this is pragmatic calibration or genuine dilution of the world's most ambitious AI framework depends heavily on what survives the negotiation process. The direction of travel, toward softer obligations, extended timelines and reduced burden for smaller businesses, is unmistakable. Watch what gets traded away before calling it a win.

EU REGULATION

What the Omnibus Actually Changes: Five Points for Enterprise Teams

Beyond the headline delay, the Digital Omnibus introduces structural changes with direct operational implications. Here are the five that matter most.

1. Grandfathering for existing systems. AI systems already on the market when obligations come into force are exempt until they undergo significant redesign. This replaces the previous hard August 2026 cutoff for existing deployments.

2. Relief for smaller businesses. A new "Small Mid-Cap" category, covering companies with up to 750 employees or up to €150 million in annual turnover, gets a reduced compliance burden. Requirements previously limited to SMEs (small and medium enterprises, meaning businesses with under 250 staff) are extended to this wider group.

3. Looser data rules for AI training. The Omnibus proposes allowing special category personal data, the most sensitive type under GDPR (the General Data Protection Regulation, the EU's data privacy law), to be used for AI bias detection. It also clarifies that "legitimate interest" can be used as the legal basis for training AI models on personal data, something previously unclear and contested.

4. More time for general-purpose AI providers. Providers of GPAI (General-Purpose AI) models, specifically the large foundation models that underpin tools like ChatGPT and Gemini, already on the market get until February 2027 to update their documentation and governance processes.

5. AI literacy obligation removed. The general requirement for all organisations to ensure staff have adequate AI literacy is abolished. Specific training requirements for high-risk AI deployers remain.

Read more →

McGann's Take

The grandfathering clause and the SMC relief are genuinely significant. But reduced burden is not the same as no obligation. The core requirements of risk classification, human oversight and audit trails remain intact. Do not let the headline delay become an excuse to stop building.

ENTERPRISE AI

Deloitte: Governance Readiness Is Falling Behind the Adoption Curve

Deloitte's State of AI 2026 report paints a clear picture of the gap between how fast organisations are deploying AI and how well they are governing it. AI tool access is up 50% year-on-year, with 60% of employees now having access to AI tools. Yet governance readiness, meaning the ability to oversee, control and account for how those tools are used, sits at just 30%, and that figure is down on last year.

Only 25% of organisations have converted 40% or more of their AI pilots into live production systems. Talent readiness, meaning having people who can actually manage AI responsibly, stands at 20%.

Read more →

McGann's Take

Speed of access without depth of governance is the dominant enterprise pattern right now. That gap is where EU AI Act enforcement will find its first targets. Reputational damage will arrive well before any fine does.

 
ENTERPRISE SECURITY

Nine in Ten Enterprises Are Loosening Security Controls to Speed Up AI

A global study by Delinea, surveying more than 2,000 IT decision-makers, found that 90% of organisations are actively pressuring security teams to relax identity controls in order to accelerate AI deployment. The largest visibility gap involves non-human identities, meaning the system accounts operated by AI agents rather than real people. These accounts often hold elevated permissions, run continuously, and are almost never reviewed with the same rigour as human access.

Gartner has listed agentic AI governance, meaning governing AI systems that act autonomously on behalf of your organisation, as a top cybersecurity priority for 2026.

Read more →

McGann's Take

Agentic AI is multiplying attack surfaces faster than governance frameworks are adapting. If your AI agents have more access than your CISO can account for, that is not an IT problem. It is a board-level liability.

INFRASTRUCTURE

Sovereign AI: Where Your Model Runs Is Now a Compliance Variable

AWS launched its European Sovereign Cloud this month, a physically separate cloud environment designed to operate independently within the EU and meet EU regulatory requirements. Similar sovereign AI infrastructure is emerging across the Gulf, Asia and other regions where governments want AI systems to operate within their jurisdiction and under their rules.

The practical implication: where an AI system is hosted, trained and governed is becoming a procurement requirement, not a background technical detail.

Read more →

McGann's Take

Enterprise teams with EU operations should be asking vendors not just what their model does, but where it runs. That question will appear in procurement frameworks and enterprise contract requirements within 12 months.

UK ENFORCEMENT

Ofcom Opens Investigation Into Grok on X

In January 2026, Ofcom, the UK's communications regulator, announced a formal investigation into the use of the Grok AI chatbot on X (formerly Twitter) under the Online Safety Act. It follows an ongoing investigation into an AI character companion service and a previous fine issued to an AI-powered site for failing to implement age verification requirements.

The UK has no dedicated AI law. But that does not mean no regulatory risk. Ofcom, the ICO (Information Commissioner's Office, the UK's data privacy regulator), the CMA (Competition and Markets Authority) and the FCA (Financial Conduct Authority) are all actively applying existing law to AI systems in their respective sectors.

Read more →

McGann's Take

Most organisations are tracking AI regulation through the wrong lens, waiting for a dedicated AI Bill that has not arrived. The enforcement is already happening through frameworks you are already supposed to comply with. That is the part worth paying attention to.

02 · Deep Intelligence
ANALYSIS

The Timeline Has Shifted. Your Compliance Programme Should Not.

The EU AI Act just got more complicated. After months of pressure from businesses arguing they did not have enough time or guidance to prepare, the European Commission proposed a package of changes known as the Digital Omnibus that would push the main compliance deadline from August 2026 to as late as December 2027. The EU's own Parliament committees voted to back a version of those changes last week. Formal negotiations between Parliament and Council are due to start in April or May.

So can you relax? The honest answer is: a little, but not much, and not for the reasons you might think.

What the law actually says right now

Until the final Omnibus text is agreed, negotiated and formally enacted, August 2026 remains the legal deadline. The negotiations have not started yet. They could take months. There is no guarantee the final text will look like what either the Commission or the Parliament committees have proposed. If your organisation is counting on December 2027, you are counting on a law that does not exist yet.

The delay is not evenly distributed

Even if the Omnibus passes broadly as proposed, the breathing room is not the same for every organisation. AI systems already on the market get the most relief, through the grandfathering clause that exempts them until significant redesign. Organisations that have not yet deployed high-risk AI systems and are planning to do so now may find they are building toward a new, shorter timeline from the point they launch.

The Omnibus also does not touch the rules already in force. Prohibited AI practices have been enforceable since February 2025. Rules for General-Purpose AI models have applied since August 2025. Those are live and not changing.

The five gaps most organisations still have not closed

Whether the deadline is August 2026 or December 2027, the same work needs to happen. Based on research from Deloitte, Gartner and independent compliance assessments, these are the five areas where most enterprises are furthest behind.

An AI inventory. Most organisations do not have a complete list of the AI tools they are using, including the ones embedded in software they already pay for. You cannot classify what you have not found.

Risk classification. The EU AI Act sorts AI systems into risk tiers: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency requirements only) and minimal risk (largely unregulated). Knowing which tier each of your AI tools falls into is the foundation of every other compliance step.

Technical documentation. High-risk AI systems require documented records of design decisions, training data, testing results and performance metrics. This is the most consistently underestimated compliance burden. Teams building without documentation will struggle to reconstruct it retrospectively.

Human oversight design. For AI systems making decisions that affect people, the Act requires documented mechanisms for a human to intervene, review and override the system. Many organisations have deployed AI decision-making without designing these checkpoints in.

Data governance for training data. Demonstrating that training data is representative, sufficiently high quality and free from bias is a formal requirement for high-risk systems. Most organisations have not begun this audit.
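The first two gaps above, the inventory and the risk classification, lend themselves to a simple data model. A minimal sketch in Python: the four tier names follow the Act's risk structure as described in this section, but every field name, the `compliance_gaps` helper and the example entries are illustrative assumptions, not taken from the Act's text or any specific tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency requirements only
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystem:
    """One entry in an AI inventory (field names are illustrative)."""
    name: str
    vendor: str
    purpose: str
    tier: RiskTier
    embedded_in: Optional[str] = None  # AI features inside software you already pay for
    human_oversight: bool = False      # documented intervene/review/override mechanism?
    documented: bool = False           # technical documentation in place?

def compliance_gaps(inventory: list) -> dict:
    """Flag high-risk systems that are missing oversight or documentation."""
    gaps = {"oversight": [], "documentation": []}
    for system in inventory:
        if system.tier is RiskTier.HIGH:
            if not system.human_oversight:
                gaps["oversight"].append(system.name)
            if not system.documented:
                gaps["documentation"].append(system.name)
    return gaps

inventory = [
    AISystem("cv-screener", "ExampleVendor", "shortlist job applicants",
             RiskTier.HIGH, embedded_in="HR suite"),
    AISystem("meeting-summariser", "ExampleVendor", "summarise internal calls",
             RiskTier.MINIMAL),
]
print(compliance_gaps(inventory))
```

Even a spreadsheet-grade version of this structure answers the question most organisations currently cannot: which of the tools we run are high-risk, and which of those lack the oversight and documentation the Act requires.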

Why the delay might increase risk for unprepared organisations

Regulators do not enforce proportionally across all organisations simultaneously. They make examples. When enforcement eventually arrives, the organisations furthest from compliance attract attention first. The Omnibus extending the deadline does not reduce the penalty for non-compliance when enforcement begins. The organisations that use the delay as an opportunity to build properly will be in a fundamentally different position from those that treat it as permission to wait.

The competitive case for acting now

Governance is increasingly a revenue question, not just a legal one. Enterprise procurement teams are beginning to require evidence of AI governance maturity before awarding contracts. Institutional investors are factoring it into valuations. Insurance underwriters are asking for documented AI risk frameworks as a condition of coverage. The organisations building these capabilities now are not just avoiding fines. They are qualifying for opportunities their unprepared competitors cannot access. The August deadline was a forcing function. The Omnibus may move it. It will not change what needs to be built.

 
03 · Tool on Trial
ENTERPRISE AI GOVERNANCE

Holistic AI

Visit holisticai.com →  ·  Enterprise pricing (custom)  ·  No free trial

Holistic AI is a UK-founded platform that helps organisations get on top of their AI risk and compliance obligations. It covers risk assessment, bias checking, ongoing monitoring and mapping your AI systems against the major regulatory and standards frameworks now shaping enterprise obligations.

Given that this issue is focused on the EU AI Act and what businesses actually need to do before the compliance deadline, it felt like the right moment to put a platform like this under the microscope.

A word of honesty upfront: this is an enterprise-only product. There is no free trial, no self-serve access and no published pricing. Most readers will not be able to log in and test it. Everything in this review is based on publicly available information. The Zymbos Score reflects that, and we have been transparent about what we could and could not verify.

What we assessed it against

Regulatory coverage

Holistic AI maps against the EU AI Act (the European law this issue covers in depth), NIST AI RMF (the US National Institute of Standards and Technology's AI Risk Management Framework, the default global standard for responsible AI governance), and ISO 42001 (the International Organization for Standardization's dedicated AI governance standard, essentially a quality mark for how organisations manage their AI responsibly). These three together cover the bases for most UK and EU enterprises.

HIGH

UK/EU market fit

UK-founded, with a documented focus on how the EU AI Act interacts with GDPR (the General Data Protection Regulation, the EU data privacy law most organisations already know well). Specific data residency details, meaning whether your data stays on UK or EU servers rather than being processed elsewhere, require direct confirmation from the vendor before signing anything.

HIGH

Certification status

SOC 2 Type II (an independent audit confirming a platform handles your data to a rigorous security standard) and ISO 27001 (the international benchmark for information security management) are not publicly confirmed. Both are standard requirements in enterprise procurement. Verify these directly before any contract conversation.

LOW

GRC integration

GRC stands for Governance, Risk and Compliance. Most large organisations already run platforms for this, including ServiceNow, Archer and OneTrust. If Holistic AI cannot connect to those, you risk running two separate systems that do not talk to each other. Some public evidence of integration capability exists, but direct validation against your specific tools is essential before committing.

MEDIUM

Agentic AI governance

Agentic AI refers to AI systems that act autonomously inside your organisation, making decisions and taking actions without a human approving each step. Holistic AI's Guardian Agents capability is a genuine differentiator in a market where this is a fast-growing requirement. Whether it is fully production-ready at enterprise scale is not confirmed from public sources.

MEDIUM

Pricing transparency

Fully custom pricing with no published figures. This is common in the enterprise governance category but it makes budget planning harder and extends procurement timelines. Expect a significant commercial conversation before you see a number.

LOW

Implementation

Moderate to high effort based on available user feedback. This is not a quick-deploy solution. Ask the vendor for a realistic time-to-first-value estimate and references from comparable customers before signing.

MEDIUM