Due to a technical issue, our newsletter was cut short. Here is the missing section.

04 · Prompt Pocket
COMPLIANCE

The EU AI Act Risk Classifier

Not sure which of your AI tools would be caught by the EU AI Act? This prompt walks you through a structured inventory and classification process, one tool at a time. You do not need a legal background to use it.

Copy this prompt

You are an EU AI Act compliance advisor helping a non-specialist business professional classify their AI tools by risk level.

I am going to describe an AI tool or system my organisation uses. Your job is to ask me a series of plain-English questions to understand what the tool does, who it affects, and how decisions are made. Based on my answers, classify the tool into one of the four EU AI Act risk tiers:

1. Unacceptable risk: banned outright (for example, social scoring systems or tools that manipulate people's behaviour without their knowledge)

2. High risk: heavily regulated, requires documentation, human oversight and compliance steps before deployment (for example, tools used in hiring, credit scoring, education assessment or healthcare)

3. Limited risk: allowed but must meet transparency requirements, such as telling users they are interacting with an AI

4. Minimal risk: largely unregulated, such as spam filters or AI-powered recommendations

After classifying the tool, tell me in plain English: what this classification means for my organisation, what I would need to do to comply if it is high risk, and what questions I should ask my vendor if I did not build the tool myself.

Start by asking me to describe the first AI tool I want to classify.

How to use it: Paste the prompt into Claude, ChatGPT or your preferred AI tool. When it asks you to describe your first tool, be specific: name what the tool does, who uses it, and what decisions it influences. Work through your tools one at a time. Keep the outputs and use them as the starting point for a proper AI inventory.
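
For readers comfortable with a few lines of code: if you would rather run the prompt programmatically, perhaps to work through a long list of tools and keep a written record, here is a minimal sketch using the Anthropic Python SDK. Treat it as a starting point, not a finished tool. It assumes the anthropic package is installed and an ANTHROPIC_API_KEY is set in your environment, and the model name shown is a placeholder you should replace with one you have access to. The same loop works with any chat API.

import anthropic

# Paste the full classifier prompt from above between the triple quotes.
CLASSIFIER_PROMPT = """You are an EU AI Act compliance advisor helping a
non-specialist business professional classify their AI tools by risk level.
[... rest of the prompt from above ...]"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from your environment
messages = []

print("Describe the first AI tool you want to classify (type 'done' to finish).")
while True:
    user_input = input("> ").strip()
    if user_input.lower() == "done":
        break
    messages.append({"role": "user", "content": user_input})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use a model you have access to
        max_tokens=1024,
        system=CLASSIFIER_PROMPT,
        messages=messages,
    )
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    print("\n" + reply + "\n")

# Save the conversation as the first entry in your AI inventory.
with open("ai_inventory_session.txt", "w") as f:
    for m in messages:
        f.write(f"{m['role'].upper()}:\n{m['content']}\n\n")

The saved transcript is exactly the kind of documented, auditable record that a proper AI inventory is built from.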

05 · McGann's Take

The compliance conversation around AI is framed almost universally as a burden. The cost of documentation. The overhead of governance. The disruption of building oversight into systems that were shipped without it.

That framing is understandable. It is also wrong.

The organisations that are quietly building AI governance frameworks right now are not doing it because a regulator told them to. They are doing it because they have worked out that governance maturity is becoming a condition of entry into the markets that matter. Enterprise procurement is asking for it. Institutional investors are factoring it in. Insurers are requiring it. The businesses that can demonstrate responsible, documented, auditable AI are qualifying for contracts and partnerships their competitors cannot access.

The EU AI Act deadline may move. The Digital Omnibus is real, and the relief it offers to smaller organisations is genuine. But the direction of travel has not changed: more accountability, more transparency, more documented evidence of how AI systems make decisions. That is where every major market is heading. The only question is whether you build that capability under pressure, or ahead of it.

The organisations that treat the next 12 months as a window are the ones that will look back on this period as an advantage. The ones that treat it as a reprieve are storing up a different conversation for later.

Build now. It will not get easier.

John McGann
Founder, Zymbos AI

You are receiving this because you subscribed at zymbos.ai

Zymbos AI  ·  London  ·  [email protected]