Bar Council updates AI guidance after lawyers mistakenly cite fabricated cases in court
The Bar Council has issued updated guidance on the professional use of artificial intelligence following a rise in cases where lawyers have inadvertently cited fabricated authorities in court. The refreshed advice responds to recent High Court rulings that highlighted the risks of relying on outputs generated by systems such as ChatGPT and other large language models. It also reflects rapid growth in the use of AI technology across the legal profession.
The guidance makes clear that while AI tools can assist with research and case preparation, barristers must understand the systems they use and must not delegate professional judgment to software. It emphasises that responsibility for accuracy, confidentiality and compliance with professional obligations remains entirely with the barrister, regardless of the technology involved.
According to the Bar Council, the updated document highlights several core risks associated with AI use. These include the risk of hallucinations, meaning the confident generation of information that is entirely fabricated, and the risk of information disorder, where accurate information becomes mixed with inaccurate content. The guidance also identifies bias within training data, technical mistakes, cybersecurity vulnerabilities and the absence of conscience or social and emotional intelligence within AI systems.
Barristers are urged to take the time to understand how tools such as Google Gemini, Perplexity, Harvey and Microsoft Copilot operate before incorporating them into their work. The guidance stresses that a failure to understand the limits of these systems increases the likelihood of errors that may mislead the court or compromise client confidentiality. The Bar Council describes human oversight as essential and warns that AI should never be treated as a substitute for verified legal research.
Barbara Mills KC, chair of the Bar Council, said recent case law underlined the dangers of misusing artificial intelligence in legal practice. She noted that the High Court had highlighted the serious implications for public confidence when fabricated judgments are placed before the court. She added that AI use in the legal sector is growing rapidly and that barristers who understand how the tools function will be better equipped to use them responsibly.
The guidance references academic studies that have examined the reliability of AI-assisted legal research. It also reminds practitioners that authoritative and verifiable research sources remain available through the Inns of Court libraries. Barristers are encouraged to confirm the accuracy of any AI-generated material by cross-checking with trusted primary sources.
The document further outlines responsibilities relating to confidentiality, data protection and intellectual property. It clarifies that these principles apply equally to general-purpose tools and to specialist legal technology platforms built using large language models.
The Bar Council said it would continue monitoring developments in AI and would update its guidance as necessary to help barristers maintain high professional standards.