Judges gain AI access but are warned it must not replace legal reasoning or breach sensitive data rules
Judges in England and Wales have been officially granted access to AI-powered tools on their personal computers, according to newly updated guidance from the Courts and Tribunals Judiciary. The move signals a cautious but deliberate step towards integrating artificial intelligence into the judicial workflow, while raising questions about the risks of hallucinated judgments and digital fakery in the courtroom.
The seven-page guidance, an update to the original issued in December 2023, introduces Microsoft's Copilot Chat as a secure AI assistant now available to judicial office holders via their eJudiciary accounts. Unlike public chatbots, Copilot Chat operates within the judiciary's own Microsoft environment, so the data judges enter and the outputs it generates remain protected, provided they stay logged into their official systems.
Despite the added convenience, the judiciary is taking no chances. The document, co-signed by the Lady Chief Justice, the Master of the Rolls, and other senior figures, doubles down on caution. Judges are urged to treat AI as a secondary tool—useful for summarising large documents or preparing presentations—but entirely unsuitable for legal analysis or decision-making.
“The current public AI chatbots do not produce convincing analysis or reasoning,” the guidance warns. “Do not enter any information into a public AI chatbot that is not already in the public domain. Any information that you input… should be seen as being published to all the world.”
The expanded document includes a glossary of key terms such as "AI agent" and "hallucination", a nod to the growing concern over generative tools producing fabricated legal cases or misleading arguments. Tips are also offered to help judges identify AI-generated submissions, including unfamiliar citations, US case references, American spelling, and arguments that appear polished but are riddled with legal inaccuracies.
Judges are specifically warned that AI is already being used to create fake evidence, including doctored text, images, and videos. These red flags, the document states, could signal an AI-generated filing (a toy screening sketch follows the list):
- Submissions referring to unrecognisable or American-style citations
- Multiple parties referencing differing legal precedents for the same issue
- Arguments that deviate from settled legal understanding
- Overly persuasive text masking glaring legal flaws
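The list invites an obvious automation question, so here is a deliberately simple sketch of how two of these flags might be screened for in Python. Everything in it, from the citation regex to the flag_filing helper, is an assumption made for illustration; the guidance itself prescribes no such tooling, and nothing this crude could substitute for judicial scrutiny.

```python
import re

# Toy screen for two of the red flags named above: US-style case citations
# and American spellings. Everything here (patterns, word list, the
# flag_filing helper) is a hypothetical illustration, not part of the
# judiciary's guidance.

# Matches citations such as "347 U.S. 483" or "410 F.2d 701".
US_CITATION = re.compile(r"\b\d+\s+(?:U\.S\.|F\.(?:2d|3d)|S\.\s?Ct\.)\s+\d+\b")

# A small, deliberately incomplete sample of American spellings that would
# be unusual in an England and Wales filing.
AMERICAN_SPELLINGS = {"analyze", "defense", "favor", "honor", "labor"}

def flag_filing(text: str) -> list[str]:
    """Return human-readable warnings for a submission's text."""
    flags = []
    if US_CITATION.search(text):
        flags.append("contains US-style case citations")
    words = set(re.findall(r"[a-z]+", text.lower()))
    found = words & AMERICAN_SPELLINGS
    if found:
        flags.append(f"American spellings present: {sorted(found)}")
    return flags

if __name__ == "__main__":
    sample = "As held in Brown v. Board, 347 U.S. 483 (1954), we must analyze..."
    for warning in flag_filing(sample):
        print("red flag:", warning)
```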
The document does not shy away from addressing the inherent risks of AI misuse, noting that outputs “will inevitably reflect errors and biases in its training data,” despite efforts to correct these through alignment techniques.
However, the tone is not entirely sceptical. The judiciary acknowledges that, when used properly, AI could help lighten the administrative burden. “There is no reason why generative AI could not be a potentially useful secondary tool,” the guidance notes—so long as it’s kept far from core judicial functions such as interpreting the law or weighing evidence.
The updated rules also place responsibility for AI usage squarely on the shoulders of litigants. Judges are advised to remind parties that they are accountable for any AI-generated information submitted to court, just as they would be for traditional evidence.
The rollout of Copilot Chat reflects a growing trend across professional sectors towards responsible AI adoption. But in the justice system, where lives and livelihoods are often on the line, the stakes are higher. The judiciary's position is clear: AI may be helpful, but it must never become a substitute for legal expertise, human judgment, or the rule of law.