Privilege, AI and risk: what clients need to know before uploading legal advice to ChatGPT or other public LLMs

Sam Thompson, 28th April 2026

People are increasingly turning to public AI tools to summarise legal advice documents, sense-check strategy, and produce digestible next steps. Used carefully, these tools can offer real efficiencies. But there is one category of material that should not be treated as just another document for AI processing: privileged legal advice.

If a client uploads privileged advice into a public large language model (“LLM”), such as ChatGPT, Claude or Gemini, the legal position becomes uncertain very quickly. It is not safe to assume that privilege is automatically lost in every case. Equally, it is not safe to assume that privilege remains intact simply because the client never intended the advice to be shared more widely. That uncertainty is itself the problem.

The prudent answer is straightforward. Privileged legal advice should not be entered into a public LLM.


Why this matters

For many clients, the temptation is obvious. Legal advice can be lengthy, nuanced and dense. A board member, project lead, claims manager or business owner may want a quick summary in plain English, a list of action points, or a short explanation of risk. A public AI tool appears to offer exactly that.

The difficulty is that privilege depends on confidentiality. Once privileged material is uploaded into a public LLM, the client has moved it outside the conventional lawyer-client sphere and into an external system that the client does not own or control. At that point, a series of difficult questions arises. Has confidentiality been compromised? Has privilege been waived? Could an opponent later argue that the advice should be disclosed? Could the information surface in some other way?

There are not yet any neat, settled answers to those questions under English law. That makes this less a technical curiosity and more a live risk issue.


A reminder of what is at stake

Legal professional privilege is one of the law’s most important protections. In broad terms, it allows clients to take and receive legal advice in confidence, and in some circumstances protects material created for the dominant purpose of litigation.

That protection is not merely procedural. It is what allows clients to speak candidly with their lawyers, test weaknesses in their own position, and receive robust advice without fear that an opponent will later demand sight of it.

Confidentiality sits at the centre of that protection. If confidentiality is undermined, privilege can become vulnerable. That is why businesses are rightly careful with legal advice, internal investigations, draft claims analyses and counsel’s opinions. Public AI tools introduce a new route by which that confidentiality may be challenged.


What happens when advice is uploaded to a public LLM

A public LLM is not a filing cabinet, and it is not simply a more convenient search bar. When a user uploads or pastes material into such a system, the material is processed by an external platform to generate an output. Depending on the platform, the settings in use, and the surrounding technical and contractual arrangements, the material may be retained, reviewed, or otherwise handled outside the client’s own controlled environment.

Even where a user believes that chat history has been disabled or that data will not be used for training, that is not a complete answer. Retention practices, system architecture, legal process, and third-party access issues may all complicate the picture. The client is no longer dealing with a purely internal or closed chain of custody.

That does not mean that every upload will result in publication to the world in any ordinary sense. But it does mean the client has introduced an avoidable argument about whether the confidentiality of the advice has been compromised.


Does privilege automatically disappear?

Probably not, but the position is not that simple.

The better view is that this is unlikely to be treated by the courts as a mechanistic, one-step rule under which privilege vanishes the moment privileged material is uploaded to a public AI tool. English law on waiver of privilege has traditionally been fact-sensitive and strongly shaped by considerations of confidentiality, consistency and fairness.

That said, clients should take no comfort from the absence of a simple automatic-waiver rule. A legal position can be uncertain and still be dangerous. Once privileged material has been entered into a public LLM, the client may have created precisely the kind of factual dispute that an opponent will later seek to exploit.

In other words, the problem is not only the risk of losing privilege. It is the risk of having to argue about it at all.


The first practical consequence: you may hand the other side an argument

In litigation, arbitration, adjudication, regulatory scrutiny or internal investigation, opponents and decision-makers look for pressure points. If there is reason to think privileged advice has been put into a public AI tool, an opponent may try to build an argument that confidentiality has been treated inconsistently and that privilege has therefore been waived or at least put in issue.

That may lead to applications for disclosure, satellite disputes about the handling of documents, arguments about fairness, and costly procedural skirmishing, all of which distract from the real issues in the case. Even if the client ultimately succeeds in maintaining privilege, the damage may already have been done in cost, time, management focus and strategic distraction.

This is a point clients sometimes underestimate. In practice, litigation risk is not measured only by whether an argument is ultimately right. It is also measured by whether the argument can credibly be made, how expensive it is to fight, and what collateral damage is caused along the way.


The second practical consequence: information may surface in ways you cannot predict

There is a further, more uncomfortable risk. Once privileged material is placed into a public LLM, the client cannot safely assume that the advice will remain sealed off from wider access or later recovery.

That concern can arise in several ways. Information about the client’s use of the tool may emerge in an investigation, insolvency process, subsequent proceedings, or disclosure exercise. In some circumstances, third parties may seek to extract or reconstruct information from AI systems. In others, data may be preserved or produced through processes entirely outside the user’s assumptions about deletion or privacy.

Not every such route will be lawful. Not every piece of material obtained in that way will be admissible. Not every output will be reliable. But again, uncertainty is not a defence strategy. If privileged advice is too important to disclose to an opponent directly, it is too important to place into a public AI environment and hope for the best.


Public AI tools are not the same as secure internal AI use

It is important not to collapse all AI use into a single category.

A public LLM available on general terms to the market is very different from a tightly governed enterprise deployment, or a closed environment operating under specific contractual, technical and retention controls. The risk analysis is therefore not identical in every case.

That distinction matters because businesses do need realistic guidance rather than blanket slogans. The right message is not that AI must never be used in connection with legal work. It is that privileged legal advice, dispute strategy and other highly sensitive legal material should not be fed into public tools simply because they are convenient.

If an organisation wishes to use AI in legally sensitive contexts, it should do so through approved systems, with proper governance, legal oversight, and a clear understanding of where data goes, who can access it, and what retention rules apply. Many law firms are now adopting robust AI platforms as part of their own workflows, precisely because those platforms can be deployed within a more controlled legal and technical framework than a consumer-facing public tool. That does not eliminate risk, but it does allow the use of AI to sit within professional processes designed around confidentiality, supervision and auditability.

If there is to be any use of AI in relation to legal issues, the safer course is for that exercise to be carried out by lawyers who understand both the underlying legal sensitivities and the operational risks of the platform being used. In practice, that means clients should resist the temptation to upload advice into public tools themselves - or to generate their own advice using those tools - and instead raise the issue with their legal team, who can decide whether AI can properly be used at all, and if so, in what environment and with what safeguards.


What clients should do instead

The safest course is the simplest one: do not upload privileged legal advice into public LLMs.

If the practical problem is that advice is too long or too complex, the better answer is usually to go back to the legal team. Ask for an executive summary. Ask for a short note of key risks. Ask for a board-ready version or a commercial action list. Good legal advice should be capable of being translated into an accessible decision-making format without sending the underlying privileged document into a public AI system.

If AI tools are to be used at all, businesses should first remove privileged content, redact identifying features where appropriate, and ensure that any tool used has been approved internally from both a legal and information governance perspective. Even then, legal teams should be cautious. Sanitisation is helpful, but it is not a magic wand.


What businesses should be doing now

This issue should not be left to individual judgment alone. Organisations should have clear internal rules on the use of public AI tools, especially in relation to legal advice, disputes, investigations, regulated material and sensitive commercial information.

At a minimum, businesses should consider whether their AI governance framework adequately addresses privilege, confidentiality, escalation procedures, approved platforms, staff training, document classification and responsibility for sign-off. In-house legal teams should be involved in that process. So should IT and information security.

For many businesses, this is no longer just an IT policy question. It is a dispute-readiness and risk-management question.


The bottom line

The law in this area is still developing. That is precisely why caution is required.

A client who puts privileged advice into a public LLM may not automatically forfeit privilege in every instance. But they may well create avoidable uncertainty, invite procedural attack, and expose sensitive legal analysis to risks that are difficult to measure and harder still to reverse.

Privilege is too valuable to test at the edge of an emerging technology.

The practical rule is simple. If the material is privileged, keep it out of public AI tools.


This article is intended as a general information resource and does not constitute legal advice on any specific facts or circumstances. Specific advice should always be taken before acting.

The content on our site is provided for general information only. It is not intended to amount to advice on which you should rely. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content on our site.

Although we make reasonable efforts to update the information on our site, we make no representations, warranties or guarantees, whether express or implied, that the content on our site is accurate, complete or up-to-date.