ChatGPT had 1.8 billion UK visits in the first eight months of 2025. With the meteoric rise in the use of Artificial Intelligence (AI) tools such as Google’s Gemini, OpenAI’s ChatGPT and Anthropic’s Claude, we are inevitably starting to see the effects of this trickling into employment practices and litigation.
A recent case from the United States considered whether any type of legal professional privilege can apply to the input entered into, or output produced by, public AI chatbots (United States of America v Heppner 25 Cr. 503 (JSR)). Our U.S. colleagues reviewed this case and set out their thoughts, which you can read in full here. In summary, the Judge found that the AI documents produced by the defendant were not privileged. One of the bases for this finding (at least in relation to attorney-client privilege) was that there was no reasonable expectation of confidentiality due to the AI chatbot’s terms and conditions of use.
Application of privilege rules in England
Any piece of Employment Tribunal litigation in England which deals with the intersection between AI and legal professional privilege will hinge on the specific facts of the case. This will include whether the use of AI was by a lawyer, and the terms and conditions of the precise AI tool used.
The Employment Tribunal in England has not yet considered whether ‘communications’ with AI can be protected by legal professional privilege. However, the Immigration and Asylum Chamber of the Upper Tribunal has done this, albeit in a cursory way. It stated in a recent decision that “… to put client letters and decision letters … into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege … Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks”.
This decision suggests that, where a “public” AI tool is used (i.e., an AI system made available for anyone to access for free or a small fee), there is a real risk that the inputs and outputs lack confidentiality and therefore cannot be privileged, whoever drafts the inputs and whatever the outputs say. This is predicated on public AI tools typically being made available on the basis that the AI company is permitted to use inputs and outputs as training data for the AI model or otherwise to improve its products and services. In practice, however, there are nuances. Some public AI tools (particularly those offered at higher subscription tiers) are made available on terms that do not permit the AI company to use inputs or outputs for purposes other than providing the service, and so are in effect akin to what the judge refers to as a “closed source” AI tool. Incidentally, “open source AI” has an altogether different meaning, which is not the subject of the judgment or this article.
By a “closed source” tool, the judgment means a private AI system where confidentiality arrangements are in place between the user and the provider of the AI system. Even where such a tool is used:
- Legal advice privilege is still unlikely to apply where the private tool is used by a non-lawyer, since a client’s use of an AI tool is not a communication between them and a lawyer (although there may be a claim to privilege if the inputs or outputs were evidence of a privileged communication); and
- Litigation privilege may apply, depending on whether the ‘dominant purpose’ and ‘existing or reasonably contemplated litigation’ limbs are met on the specific facts.
As this area of law continues to develop, we recommend that:
- In the first instance, you should err on the side of caution and assume that, when interacting with a public AI chatbot, any user input (whether by a lawyer or a client), or AI output, is not protected by legal professional privilege;
- You should also avoid uploading privileged documents to a public AI tool because that may result in a loss of privilege over those documents;
- If you do want to try to ensure that inputs or outputs are protected by privilege, ensure that a closed, enterprise AI tool is used, and review the terms of the specific AI platform. For example, the court in Heppner noted that Claude’s privacy policy at the relevant time explicitly stated that users consent to Anthropic collecting data on users’ inputs and Claude’s outputs, and that it reserved the right to disclose such data to third parties such as governmental regulatory authorities;
- Bear in mind that, even where a private tool is used, a claim to privilege will only arise in limited circumstances (for example, when a lawyer is using it for the purposes of giving legal advice);
- Provide training to HR teams and managers regarding the use of AI, particularly in relation to documents and communications which may become disclosable in Employment Tribunal proceedings;
- Be conscious of the potentially disclosable documents you create through using AI, and how privilege may be lost in any privileged documents which are uploaded to AI; and
- Seek legal advice before you input anything into AI, and consider whether any AI input could be completed by a lawyer to protect privilege.