Heppner, AI and Privilege: US and UK Lessons for Lawyers
ZYMBOS INTELLIGENCE — LEGAL ANALYSIS
1. Introduction: when AI meets privilege
In February 2026, a US federal court confronted a question many lawyers had quietly hoped to avoid: what happens when a criminal defendant uses a public, consumer-grade AI chatbot to “draft his defence” and those AI-generated documents are later seized by investigators? Are they protected by attorney–client privilege or work product, or are they simply discoverable like any other document?
In a memorandum dated 17 February 2026, Judge Jed Rakoff of the Southern District of New York, in United States v Heppner, held that the defendant’s AI-generated “strategy reports” — created with Claude, a publicly available generative AI chatbot operated by Anthropic — were not privileged. This was so despite the reports having been prepared after the defendant became a known target of a grand jury investigation, and despite his later sharing them with his lawyers. Critically, Judge Rakoff described the question as one of first impression nationwide: no US court had previously ruled on whether communications between a user and a publicly available AI platform, in connection with a pending criminal investigation, attract attorney–client privilege or work product protection.
Because Heppner is a February 2026 decision, it should be treated as an early marker rather than the final word. But its analysis is already shaping how firms, commentators and regulators think about AI and privilege in both the United States and England & Wales.
This article uses Heppner as a springboard to examine what public AI tools mean for privilege on both sides of the Atlantic — looking first at the US decision, then at how similar behaviour would play out under legal advice privilege and litigation privilege in England & Wales, before drawing out common lessons and practical playbooks for US attorneys, UK solicitors, and barristers.
The headline is simple but uncomfortable
Public AI platforms are not privileged spaces. Treat them as third parties. If clients are feeding confidential or privileged information into those tools, they may be sacrificing protection without realising it.
2. United States v Heppner: what the court actually decided
2.1 Background and facts
Heppner is a federal criminal case in the Southern District of New York. The defendant, a corporate executive, was charged on 28 October 2025 with securities fraud, wire fraud, conspiracy to commit both, making false statements to auditors, and falsifying corporate records. The indictment alleges that Heppner defrauded investors in GWG Holdings, Inc. out of more than $150 million by making false representations and causing the company to enter into undisclosed self-serving transactions with two privately held companies he controlled. He pleaded not guilty and trial is set for April 2026.
After a grand jury subpoena issued and Heppner was expressly told he was a target of the investigation, he began typing detailed narratives about the facts and law into Claude, asking it to outline potential charges, explore possible defences, and help him structure his thinking about the case. Law enforcement later executed a search warrant at his home and seized devices and paper documents. Among those documents were roughly thirty-one “reports” — transcripts or exports of his interactions with Claude. In his view, these reports were prepared “in anticipation of a potential indictment” and reflected what he “might argue with respect to the facts and the law.”
Heppner later shared some or all of these Claude-generated materials with his defence counsel. When the government sought to review them, the defence asserted both attorney–client privilege and work product protection and asked the court to bar the prosecution from examining the documents.
2.2 Claude and its terms
Claude is a public, web-based generative AI chatbot operated by Anthropic. Like other mainstream consumer AI services, it is accessed under standard terms of use and a published privacy policy. The court highlighted three critical features of those terms:
- Anthropic collected and retained user inputs (the prompts) and outputs (Claude’s responses).
- Anthropic could use this data to improve and “train” Claude.
- Anthropic reserved the right to disclose data to a range of third parties, including service providers, regulators and governmental authorities, and in connection with claims, disputes and litigation.
Claude’s interface also contained disclaimers that it was not a lawyer, could not provide formal legal advice, and that users should consult qualified legal counsel. When the government queried Claude directly during proceedings, it responded that it is “not a lawyer” and “can’t provide formal legal advice or recommendations,” recommending that users consult a qualified attorney. These seemingly routine provisions turned out to be central to the outcome.
2.3 Why attorney–client privilege failed
US attorney–client privilege requires three core elements: a communication; between privileged persons (typically client and lawyer or their necessary agents); made in confidence for the purpose of seeking, obtaining, or providing legal advice. Judge Rakoff held that Heppner’s Claude communications failed on multiple fronts.
First, they were not communications between client and attorney. Claude was not a licensed lawyer, had no professional regulation, and no attorney–client relationship existed with it. The court rejected any suggestion of an “AI privilege,” emphasising that recognised privileges are grounded in human professional relationships with duties of loyalty and regulatory oversight, which software systems do not have.
Second, the communications were not confidential. By agreeing to Anthropic’s privacy policy, Heppner had accepted that his prompts and Claude’s responses would be logged, used for training, and could be disclosed to third parties including regulators and litigants. The court concluded he had no reasonable expectation that these interactions would remain private.
Third, the purpose element was missing. Heppner’s lawyers had not directed or supervised his use of Claude; he acted entirely on his own initiative. Claude’s own disclaimers reinforced this conclusion. The fact that he later used the outputs as a springboard for discussions with counsel did not transform the nature of the original communications.
Finally, the court rejected the argument that later sharing the Claude documents with defence counsel could retrospectively cloak them in privilege. Non-privileged documents do not become privileged merely because they pass through a lawyer’s hands.
Key risk: waiver of existing privilege
The court addressed a further and serious risk directly: if Heppner had fed into Claude information that his lawyers conveyed to him during the representation, then by disclosing that material to Claude and Anthropic he waived any privilege that had already attached to that advice. This is distinct from the question of whether AI outputs are privileged. It means that feeding existing privileged advice into a public AI tool can destroy protection that was already in place.
2.4 Why work product protection failed
The work product doctrine protects materials prepared in anticipation of litigation or for trial by or for a party or its representative. The court accepted that the Claude documents were created “in anticipation of litigation” — but that was not enough. Two obstacles were fatal:
- The documents were not prepared by or at the direction of counsel. Defence lawyers confirmed they had not suggested or instructed the defendant to use Claude.
- The documents did not reflect counsel’s mental impressions or litigation strategy at the time they were created. Counsel conceded that while the AI documents later “affected” strategy, they did not “reflect” counsel’s strategy when Heppner created them.
The result was stark: none of the Claude-generated documents Heppner created on his own were shielded from government review, even though they were created with litigation in mind and later shared with his lawyers.
3. What this means for US attorneys
3.1 Attorney–client privilege in an AI world
The most immediate implication for US lawyers is that communications between clients and public AI tools like Claude are, by default, outside the privilege framework. Three common scenarios illustrate the point.
Pre-instruction AI use. A potential client, worried about a brewing dispute or investigation, turns to a public chatbot such as Claude for guidance: typing “Here is what happened…” and asking “what should I do?”. Those exchanges create a written record that is not privileged. If litigation or enforcement follows, opposing parties or prosecutors may seek that record. It may contain factual admissions, unhelpful characterisations, or early inconsistent stories that can be used to challenge later testimony.
Post-instruction, unsupervised AI use. After hiring a lawyer, a client continues to use Claude privately to draft statements, summarise legal advice, or test narrative variations. As the court made clear, if they paste privileged emails or internal memos into the chatbot, they risk waiving privilege in that material — not only for the AI output, but for the original communications themselves. The client often assumes that, because the subject is legal, the channel is protected. Heppner demonstrates that assumption is mistaken.
Lawyer-directed use of public AI. A more troubling scenario arises where lawyers themselves suggest using consumer tools such as Claude to help clients “organise their thoughts” or “summarise the case.” If done via public platforms with data-sharing terms, that may both undermine privilege and fall below expected standards of care. However, the Heppner memorandum hints at a narrow but meaningful exception: had counsel directed Heppner to use Claude, the tool might “arguably” have functioned as a lawyer’s agent within the protection of attorney–client privilege. That pathway is untested and narrow — the confidentiality obstacles from Anthropic’s privacy policy would still need to be addressed — but it suggests that structured, counsel-directed and carefully documented AI use may attract different treatment in future cases.
By contrast, where attorneys use AI within controlled, enterprise-grade environments — research or drafting tools integrated into firm systems under strict confidentiality terms — the privilege analysis is much closer to that applied to traditional legal technologies. The key lies in who is directing the work, where the data resides, and whether the tool can reasonably be seen as part of the attorney’s infrastructure rather than as an external third party.
3.2 Work product in the age of generative AI
For work product, Heppner signals that courts will continue to require a meaningful connection between the materials and the attorney’s own role in preparing for litigation. Client-only AI use on public tools, even when motivated by litigation, is unlikely to qualify. The fact that a document is created under the shadow of a potential lawsuit does not make it work product if there is no attorney involvement and it does not reveal attorney strategy.
Where lawyers directly harness AI for their own work — drafting briefs, generating outlines, synthesising discovery — the output, along with their prompts, annotations and edits, can more naturally be seen as part of their mental and analytical processes. Work product arguments are much stronger in such cases, provided the environment preserves confidentiality and is genuinely under the control of the legal team. The grey areas will involve hybrid documents: AI-generated drafts heavily adapted by counsel, or counsel’s commentary on AI output. Firms should ensure those lawyer-added layers are handled inside secure systems that can support that designation.
3.3 Ethical and practical responsibilities
Heppner does not create new ethical rules, but it sharpens the contours of several existing duties.
- Competence. Lawyers must understand, at least at a functional level, how tools like Claude handle data — which tools record prompts, how long data is retained, whether it is used for training, and how providers respond to legal process.
- Supervision. Partners and supervising attorneys remain responsible for the conduct of associates, paralegals and staff. This includes setting clear boundaries for AI use and ensuring training and oversight. An internal culture where junior lawyers upload client documents to free web tools to “save time” is an obvious risk.
- Client counselling. Lawyers have a duty to warn clients about foreseeable risks. Heppner offers a concrete example to point to when explaining why forwarding privileged advice into a chatbot is functionally similar to forwarding it to a stranger — and that it may destroy protection that already exists.
3.4 Enforcement, discovery and incident response
Heppner shows that investigators will not hesitate to exploit AI-generated materials such as Claude chat logs where they can lawfully obtain them. Search warrants, subpoenas, and in some cases direct requests to AI providers may all be used to access relevant data. Discovery requests increasingly ask about the use of AI tools in relation to disputed issues.
Against that background, US firms should have two things in place:
- A clear internal policy specifying which AI tools may be used, for what purposes, and with what kinds of data — prohibiting uploading client documents or privileged content into public platforms, and requiring approvals for novel use cases.
- An incident response procedure for when privileged content has been pasted into a consumer AI: documenting what happened, assessing legal implications, considering engagement with the provider, and recalibrating privilege assertions or litigation strategy as needed.
4. England & Wales: legal advice privilege, litigation privilege, and AI
4.1 Legal advice privilege and public AI
In England & Wales, legal advice privilege protects confidential communications between a client and their lawyer made for the dominant purpose of giving or receiving legal advice. Public AI tools fail on two core elements.
First, they are not lawyers. However sophisticated their outputs, they are not solicitors or barristers, and they are not subject to the professional regulation and duties that underpin legal professional privilege. Second, confidentiality is fragile at best. Where a provider’s terms of use and privacy policy permit logging, internal reuse, training, and disclosure to third parties, any information fed into the system is being shared outside the privileged circle. Putting the contents of a privileged advice note into a chatbot in order to “summarise it in plain English” is functionally no different from handing that note to a third party who is not part of the legal team — and may waive privilege not just in the AI output but in the underlying advice itself.
4.2 Litigation privilege, third parties, and AI
Litigation privilege in England & Wales protects confidential communications between client, lawyer, and third parties where litigation is in reasonable contemplation and the dominant purpose of the communication or document is that litigation. In theory, that could offer more room to protect AI-assisted materials. In practice, confidentiality remains the stumbling block: public AI providers are third parties who are not engaged by the lawyer as part of the litigation team and who typically reserve rights to use and disclose user data.
There is a narrow path where AI tools may be closer to traditional agents. If a law firm contracts with an AI vendor as part of its legal technology stack — an on-premise or securely hosted system that does not train on client data, with stringent confidentiality obligations and technical safeguards — then work carried out through that system can often be treated like any other use of outsourced legal technology. But that is a very different situation from a client logging into a public chatbot on their personal device.
4.3 Professional obligations for solicitors
English solicitors owe robust duties of confidentiality and must uphold legal professional privilege as a fundamental client right. The Solicitors Regulation Authority has made clear that the adoption of new technologies, including AI, does not dilute those duties. Solicitors should:
- Explain to clients that public AI tools are not secure channels for privileged information.
- Warn that inputting advice, draft pleadings, or confidential documents into such tools may waive privilege and expose sensitive information, including in previously privileged communications.
- Discourage or prohibit clients from using public AI on active contentious or regulatory matters without consultation.
- Ensure that any AI used by the firm itself is subject to carefully vetted contractual terms covering confidentiality, data use, and regulatory compliance.
These obligations are not limited to solicitors in private practice. In-house lawyers must also be alive to how their organisations use AI tools and how that might affect privilege in internal investigations, regulatory interactions, and disputes.
4.4 Barristers and chambers
Barristers typically rely on instructing solicitors to manage privilege boundaries, but they are not insulated from AI-related risks. Counsel may receive drafts of witness statements, skeleton arguments, or advice that have already been passed through AI tools. Chambers should develop clear guidance on acceptable AI use, including whether and how personal accounts can be used for client work, avoiding uploading privileged material into public tools, and understanding the privilege implications where AI has been used upstream in the litigation chain. Barristers also have a role in raising concerns with instructing solicitors where they suspect that privilege may already have been compromised.
5. Convergence and divergence: US vs UK
When we set the US and English positions side by side, a clearer picture emerges.
On the convergent side:
- Both systems treat public, consumer-grade AI platforms as third parties, not as lawyers or privileged intermediaries.
- Both insist that privilege depends on genuine confidentiality; terms that allow providers to log, reuse, and disclose data are inherently at odds with that requirement.
- Neither system has shown any appetite to create a distinct “AI privilege.” Existing doctrines are stretched to cover new technology, not replaced by it.
On the divergent side, the doctrinal labels and edges differ. The US work product doctrine is, in some respects, broader than English litigation privilege, but Heppner illustrates that courts may constrain it where client-only AI use is concerned. English litigation privilege can apply to communications with third parties such as experts and agents, which might in carefully structured scenarios allow for privileged engagement with AI tools that are tightly integrated into the legal team’s infrastructure under strict confidentiality obligations.
Multinational exposure
A single act — a global executive using a public AI tool to “draft” an explanation of events — can create exposure on both sides of the Atlantic. US and UK counsel should coordinate early to understand any AI use, align their privilege positions, and avoid inconsistent representations to courts or regulators.
6. Practical playbooks for practitioners
6.1 Anticipate client behaviour
Clients are already using AI tools, often without telling their lawyers. Expect and plan for:
- Pre-instruction “brain dumps” to AI about disputes or investigations.
- Post-instruction use of AI to rewrite advice, draft emails, or test narrative variations.
- In-house teams using AI embedded in productivity suites to summarise large volumes of emails or documents, potentially including privileged material.
At the outset of a matter, particularly a contentious, regulatory, or investigatory one, lawyers should simply ask: “Have you used any AI tools in relation to this issue? Which ones, and for what?” Recording and understanding the answer early can prevent surprises later.
6.2 Build basic controls
For both US firms and UK practices, some core measures are now becoming baseline good practice:
- Engagement letters and client care documents. Include clear language explaining that public AI tools are not confidential environments and that using them with privileged or sensitive information may waive privilege — including in previously privileged material.
- Client instructions in contentious/regulatory work. In litigation holds, internal investigation notices, and other instructions, explicitly tell employees and witnesses not to “test” documents or narratives in public AI tools.
- Internal policies on AI. Distinguish between approved AI tools and use cases (internal research assistants, summarisation tools inside secure systems) and prohibited uses (uploading client documents or advice to public chatbots).
- Vendor governance. Where firms contract with third-party AI providers, ensure contracts address confidentiality, data segregation, data processing, and responses to legal process in a way that supports privilege obligations.
6.3 Train and prepare
- Train lawyers, staff, and where appropriate clients on the basic privilege implications of AI use. Use Heppner as a concrete illustration of what can go wrong.
- Encourage a culture where people report potential AI-related missteps early, so that remediation and strategy adjustments are possible.
- Develop an incident response plan for situations where privileged content has already been uploaded to a public AI system: document the exposure, assess its impact, consider technical or contractual options, and review what it means for existing privilege assertions.
7. Conclusion: privilege will not protect itself
The rise of generative AI does not abolish attorney–client privilege or legal professional privilege, but it does make them easier to lose. United States v Heppner, decided in February 2026 as the first US ruling of its kind, is a vivid early reminder that courts will not stretch these doctrines to cover client behaviour that is fundamentally inconsistent with confidentiality. The same technologies that help clients think things through quickly also create detailed written records that may be entirely discoverable — and that can strip protection from previously privileged communications in the process.
For US attorneys and UK solicitors and barristers alike, the core message is the same. Public AI platforms such as Claude should be treated as external third parties, not as confidential sounding boards. If clients want the protection of privilege, they must keep privileged communications within the privileged circle: client, lawyer, and carefully chosen, tightly controlled agents and technologies.
With appropriate governance — contractual safeguards, secure infrastructures, clear policies, and thoughtful client counselling — AI can still play a valuable role in legal practice. But privilege will not protect itself in an AI-saturated environment. It needs to be actively defended.
References
- Harvard Law Review, United States v. Heppner (2026)
- Memorandum of Judge Jed Rakoff, United States v. Heppner, 25 Cr. 503 (S.D.N.Y. 17 Feb 2026)
Legal Disclaimer
This article is provided for informational and educational purposes only. It does not constitute legal advice and should not be relied upon as such in the United States, England & Wales, or any other jurisdiction. The law in this area is developing rapidly and the position may differ depending on jurisdiction, specific facts, and applicable rules of professional conduct. Readers should seek independent legal advice from a qualified attorney, solicitor, or barrister regarding their own circumstances. Nothing in this article creates an attorney–client relationship or any equivalent professional relationship.


