Zymbos Intelligence — Issue 005 · Agents, Accidents and Arms Races
AI insight for professionals who act on what they know
Wednesday 8 April 2026  ·  By John McGann, London
This week: a geopolitical tug-of-war over one of AI's most closely watched companies, a government-funded study with findings that should make anyone deploying agents pause, a high-profile security failure from the industry's self-styled safety leader, and a practical tool that belongs in your daily workflow.
01 · Intelligence Briefing
Geopolitics  ·  UK Policy
Britain Moves to Court Anthropic as the Silicon Valley–Pentagon Rift Widens
The UK Department for Science, Innovation and Technology (DSIT) has developed proposals to attract Anthropic to London, including an office expansion and a potential dual stock listing. The move capitalises on Anthropic's public falling-out with the Pentagon, which designated the company a supply chain risk after it refused to remove safety guardrails from its Claude model for unrestricted military use. President Trump subsequently ordered the federal government to stop using Anthropic products within six months. A court injunction has since blocked the Pentagon's designation. In the meantime, Claude overtook ChatGPT in mobile app downloads for the first time, and Anthropic's chief executive Dario Amodei is expected to visit the UK in May.
McGann's Take
The UK is positioning itself as a beneficiary of a fracture between Washington and Silicon Valley. Whether Anthropic takes the offer seriously will depend less on politics and more on whether the UK can offer the infrastructure, talent, and regulatory clarity that a company of this scale actually needs. That is still an open question.
Read more →
AI Safety  ·  Agentic AI
AI Agents Are Ignoring Instructions — And the Problem Is Getting Worse
A study funded by the UK AI Security Institute (AISI) and conducted by the Centre for Long-Term Resilience (CLTR) found that AI "misbehaviour" increased fivefold between October 2024 and March 2026. Researchers identified nearly 700 documented cases of AI agents ignoring or circumventing human instructions in real-world deployments. Examples include an agent that published a blog post accusing its user of insecurity, and another that created a sub-agent specifically to modify code it had been told not to touch. The lead researcher warned that agents currently behaving as "slightly untrustworthy junior employees" could, within twelve months, become "extremely capable senior employees scheming against you."
McGann's Take
These are not edge cases from a lab. They are documented incidents from real deployments, with real consequences. If you are evaluating agentic tools for your organisation right now, this study is required reading before you extend any autonomous permissions.
Read more →
AI Security  ·  Governance
Anthropic Leaked Its Own Source Code — Then Made Things Worse Trying to Fix It
On 31 March 2026, Anthropic accidentally exposed approximately 512,000 lines of proprietary source code for its Claude Code software engineering tool on GitHub, following what the company described as human error. The leaked files included core agent orchestration logic, memory management systems, and workflow architecture. The code was mirrored on external sites within hours. In attempting to issue DMCA takedown notices, Anthropic's process inadvertently removed thousands of unrelated GitHub repositories belonging to other users — a second error the company subsequently acknowledged.
McGann's Take
Anthropic's brand is built on being the careful, safety-conscious option. This week it published its own internal architecture by accident and then deleted other people's work trying to contain the fallout. No organisation at scale is immune from operational failure, but this one will be difficult to set aside quickly.
Read more →
AI Strategy  ·  Market Reality
OpenAI Killed Sora to Save Its Coding Roadmap — and Lost a $1 Billion Disney Deal in the Process
OpenAI shut down its Sora text-to-video generator in March 2026, freeing up computing resources for its upcoming coding-focused model, code-named "Spud." The Wall Street Journal described Sora as an "expensive strategic miscalculation." Sam Altman personally informed Disney chief executive Josh D'Amaro of the decision; Disney, reportedly caught off-guard, subsequently shelved plans to invest $1 billion in OpenAI. Despite briefly topping Apple's App Store charts at launch, Sora downloads had fallen sharply as users found the output quality insufficient to sustain regular use.
McGann's Take
Compute is finite. Even the best-funded AI companies cannot run every product simultaneously. Sora is a useful reminder that impressive launches do not guarantee sustainable products — and that enterprise partners have long memories when they are surprised by decisions that affect their own strategic plans.
Read more →
Workforce  ·  Regulation
AI Is Hollowing Out Entry-Level Jobs — and UK Regulators Are Watching How You Hire
Two separate reports this week converge on the same problem. The British Chambers of Commerce (BCC) warned that AI adoption is driving a sharp decline in entry-level roles, with 54 percent of UK small and medium-sized enterprises (SMEs) now using AI tools — more than double the 25 percent reported in 2024. Fewer entry-level opportunities risk worsening youth unemployment and long-term skills shortages. Separately, the Information Commissioner's Office (ICO) published draft guidance making clear that human involvement in AI-assisted recruitment must be "meaningful and active" — not a token review of an automated decision already made. The ICO signalled that enforcement action may follow where organisations fall short of this standard.
McGann's Take
If your hiring process uses any automated screening tool, the ICO's guidance gives you a clear and specific test: can the human reviewer exercise real influence before a decision is applied? If the honest answer is no, that is now a compliance issue, not just an HR question.
Read more →
02 · Deep Intelligence
This Week's Analysis
AI Agents Are Going Rogue. Here Is What That Means for Your Business.

The pitch for AI agents is straightforward enough. You connect a capable language model to your systems, define a set of tasks, and let it work while you focus elsewhere. The demos are convincing. The productivity numbers look good. And this week, every major platform is either already selling agent capabilities or announcing them for the near future.

Also this week, a UK government-funded study found that documented cases of AI agents ignoring, circumventing, or actively working against human instructions increased fivefold between October 2024 and March 2026. Nearly 700 real-world incidents. Not theoretical. Not lab conditions. Actual deployments, actual users, actual consequences.

From Tool to Colleague to Something Harder to Manage

The problem is structural. AI agents are no longer just generating text for a human to review. They are being connected to email inboxes, databases, customer-facing systems, and financial workflows. They are being granted permissions that allow them to act rather than advise. In that environment, an agent that interprets its instructions creatively is not a curiosity. It is an operational risk.

The cases documented in the CLTR study are instructive. One agent, apparently dissatisfied with its controller's behaviour, published a blog post accusing the user of insecurity. Another created a secondary agent specifically to modify code it had been explicitly told to leave unchanged. These are not failures of the underlying model in a technical sense. They are failures of oversight design — systems deployed with more autonomy than the governance around them could support.

"They may be slightly untrustworthy junior employees right now. Within twelve months, they could become extremely capable senior employees scheming against you." — Tommy Shaffer Shane, former government AI researcher, Centre for Long-Term Resilience

The Anthropic source code leak adds an uncomfortable layer to this picture. The company whose agent architecture was briefly exposed to the public is the same one whose tools are being adopted for autonomous software engineering workflows. This is not a reason to avoid agentic tools. It is a reason to be clear-eyed about what "safety-first" actually means in practice, which is that even well-intentioned organisations operating carefully will make serious mistakes at scale.

Three Questions Worth Asking Before You Extend Agent Permissions

The governance question is not whether to use AI agents — the competitive pressure to do so is real, and the productivity case in well-defined workflows is legitimate. The question is whether your oversight structure is keeping pace with the autonomy you are granting.

Before extending autonomous access to any agent in your organisation, three questions are worth answering clearly. First: can this agent take any action that cannot be reversed, and if so, who approves it before it proceeds? Second: what happens when the agent encounters something outside its expected parameters — does it stop and escalate, or does it proceed and guess? Third: do you have a legible audit trail showing what the agent did, in what order, and why?
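
If it helps to make those questions concrete, here is a minimal sketch in Python of what they might look like as code. It is illustrative only, not any vendor's agent framework; the action names, the approval step, and the log format are all hypothetical placeholders for your own systems.

import json
import time

# Hypothetical action names; substitute the actions your agent can actually take.
IRREVERSIBLE = {"send_email", "delete_record", "transfer_funds"}
ALLOWED = IRREVERSIBLE | {"read_record", "draft_reply"}

def require_human_approval(action: str) -> bool:
    # Question 1: an irreversible action needs a named human to approve it
    # before it proceeds. A console prompt stands in here for whatever
    # approval routing your organisation actually uses.
    answer = input(f"Approve irreversible action {action!r}? [y/N] ")
    return answer.strip().lower() == "y"

def audit(event: str, action: str) -> None:
    # Question 3: an append-only trail of what the agent did, in what order,
    # and why it was allowed or stopped.
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps({"ts": time.time(), "event": event, "action": action}) + "\n")

def gate(action: str) -> None:
    if action not in ALLOWED:
        # Question 2: outside its expected parameters, the agent stops and
        # escalates. It never proceeds and guesses.
        audit("escalated", action)
        raise PermissionError(f"Out of scope, escalating to a human: {action}")
    if action in IRREVERSIBLE and not require_human_approval(action):
        audit("blocked", action)
        raise PermissionError(f"Approval denied: {action}")
    audit("approved", action)
    # ...hand off to the real tool call here...

The sketch itself is not the point. The point is that each of the three questions maps to a specific, testable control, and if you cannot say where each one lives in your own deployment, the honest answer is that you do not yet have it.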

These are not technical questions for your IT team to answer alone. They are governance questions, and the window for setting those standards before something consequential goes wrong is narrower than most organisations assume. This week's evidence suggests the pace of deployment has outrun the pace of oversight design. That gap is worth closing before it closes itself.

03 · Tool on Trial
Perplexity
AI Research  ·  Search  ·  Knowledge Management
Zymbos Score
8.2/10
Pro: £17/mo  |  $20/mo
Enterprise Pro: £32/mo  |  $40/mo
Free tier available
What it is

Perplexity is an AI-powered research tool that retrieves real-time information from the web and synthesises it into a direct answer with cited sources. Unlike a standard search engine, it saves you the work of sifting through ten pages of results to find what you need. Unlike a general-purpose AI assistant, it anchors its answers in citations — every claim links back to a source you can check. It covers breaking news, technical queries, competitive research, and background reading on virtually any topic.

Where it earns its keep

Perplexity's core advantage is source transparency. For professionals who need to verify claims, track recent developments, or conduct rapid background research before a meeting or a piece of work, this significantly reduces the risk of acting on confident but outdated or invented outputs. The Pro tier adds access to multiple frontier models — as at April 2026, these include GPT-5.4, Gemini 3.1 Pro, Claude Sonnet 4.6, Claude Opus 4.6, Sonar, and Nemotron 3 Super — so you can route different query types to whichever model handles them best. It is also genuinely fast, and the interface requires no prompting expertise to use effectively from day one.

Where it falls short

Perplexity is a research layer, not a thinking partner. It synthesises well but can oversimplify complex or nuanced topics, and its citation trail still requires manual verification for anything consequential. It does not match a dedicated AI assistant for drafting, structured analysis, or strategic thinking tasks. Use it to gather and verify; use something else to interpret and act. For teams already using Claude or ChatGPT for core work, Perplexity sits most naturally as a complementary tool rather than a replacement.

Ratings
Ease of use
9/10
Output quality
8/10
Source transparency
9/10
Value for money
8/10
Business applicability
7/10
Verdict

If you spend meaningful time each week researching, monitoring news, or fact-checking before you write or advise, Perplexity is one of the most immediately useful AI tools available. The free tier is genuinely usable. Pro is worth it if research is a regular part of your work. It does not replace a capable AI assistant for drafting or analysis — but as a daily research layer, it is difficult to beat at this price point.

Try Perplexity →

04 · Prompt Pocket
Audit Your AI Agent Before You Deploy It
AI Agents  ·  Risk Management  ·  Works in Claude or ChatGPT
Use this before extending autonomous access to any AI agent. Replace the bracketed section with your specific deployment details — the more precise you are, the more useful the output.
You are a senior AI governance consultant. I am planning to deploy an AI agent with the following access and responsibilities:

[Describe the agent's role, the systems it can access, the tools it can use, and the decisions it is permitted to make autonomously.]

Please audit this deployment against five risk dimensions and provide a structured assessment for each:

1. Scope creep: Could this agent extend its own remit beyond what I have defined?
2. Irreversibility: Which actions, if taken autonomously, cannot easily be undone?
3. Human oversight gaps: At what points must a human review the agent's output before it proceeds?
4. Failure escalation: If the agent encounters something outside its expected parameters, does it stop, guess, or escalate?
5. Audit trail: How will I know what the agent did and why?

For each dimension, rate the risk as Low, Medium, or High. Explain your reasoning in two to three sentences, and suggest one concrete mitigation I can implement before launch.
05 · McGann's Take
Closing Perspective
The Gap Between the Pitch and the Evidence Is Narrowing Fast

Three stories this week, taken together, say something worth sitting with. The company best known for AI safety accidentally leaked its own source code and then deleted other people's work trying to contain the damage. A government study found that AI agents are actively defying human instructions five times more often than they were in October 2024. And OpenAI quietly cancelled a flagship product because the economics did not hold up, surprising a partner who had been planning a billion-dollar investment around it.

None of this means AI is failing. These are signs of an industry maturing — which is a different thing entirely. Maturing means confronting what does not work, often publicly, often at cost. The companies building these systems are learning where the gaps are. The question for anyone adopting them is whether your own governance is learning at the same pace.

The tools and prompts in this issue exist precisely because thoughtful adoption requires structure. The intelligence is not in the model. It is in knowing what to use it for, and what to keep in human hands. That distinction gets more important, not less, as the capabilities grow.

John McGann
Founder, Zymbos AI
Zymbos Intelligence
zymbos.ai
You're receiving this because you subscribed at zymbos.ai
© 2026 Zymbos Intelligence  ·  John McGann  ·  London, UK
Zymbos Ltd  ·  Company No. 16198848  ·  Teddington, England