Law Society issues updated guidance on social media, warning of AI-driven misinformation
The Law Society of England and Wales has urged solicitors to treat AI-generated social media content with caution, warning that misinformation, deepfakes, and “clickbait” now pose professional and reputational risks for the legal sector.
In new guidance issued this week, the Society said solicitors must verify the accuracy of AI-assisted content and understand how their use of online platforms aligns with obligations under the Solicitors Regulation Authority (SRA) principles and code of conduct.
Mark Evans, president of the Law Society, said social media continues to be an invaluable professional tool but warned that rapidly advancing artificial intelligence has changed the digital landscape for lawyers.
“AI and machine-learning have been increasingly integrated into social media platforms,” Evans said. “These technologies can support users by assisting in content creation and improving engagement, enabling solicitors to target specific audiences with tailored messages.
“However, it is essential to verify AI-generated content, as there is a significant risk of misinformation, disinformation and clickbait content. Our guidance should help members to use generative AI in a responsible and ethical way.”
The updated guidance notes that while platforms such as LinkedIn, TikTok, Instagram, and Facebook have begun automatically labelling AI-generated material, others, such as YouTube, often rely on users to disclose the origin of their content manually.
The Law Society’s note cautions that this creates uneven transparency, meaning that legal professionals could inadvertently engage with or share misleading posts that appear authentic.
“Just like posts by humans, it is important to check the veracity of AI-generated content,” the guidance states. “There may be a risk of misinformation or disinformation, as well as content that has been designed to attract attention for engagement (commonly known as ‘clickbait’).”
The Society has also warned solicitors that information they share online can be used to train AI systems. The guidance highlights that, beginning next month, LinkedIn will use personal data and content shared by UK members to train its AI models.
“You should check the terms of service for any social media platform you use to see what its AI training policies are,” the document advises. “You should also check whether there are options to opt out of AI training if you wish to do so.”
The update follows wider concerns across the legal profession about the reliability and ethical use of AI. Earlier this year, both barristers and solicitors were reminded by professional bodies of the need to uphold public confidence in the justice system and avoid “gratuitous attacks” or unverified claims online.
AI’s growing influence on social media has raised questions about data protection, consent, and the accuracy of public legal commentary, as generative models are increasingly used to create text, video, and image content that may appear genuine but lack factual grounding.
The Law Society’s guidance emphasises that solicitors remain personally responsible for the material they publish or share online, even when automated systems are involved. Posts that misrepresent facts, breach confidentiality, or undermine professional standards may result in regulatory action by the SRA.
The document further warns that AI-generated imagery, including deepfakes, presents a new layer of reputational risk for professionals whose likeness or voice could be synthetically replicated. The Society advises firms to establish internal protocols for verifying the sources of online media and to train staff on identifying manipulated content.
Evans said the Society’s objective was not to discourage social media use but to encourage digital literacy and ethical awareness among legal professionals.
“Social media can be a powerful way to connect with clients and the public,” he said, “but the legal profession must stay vigilant in ensuring the information it circulates is accurate, responsible, and compliant with professional duties.”
The updated guidance, which applies to solicitors, firms, and legal executives, forms part of the Society’s broader effort to modernise ethical standards for emerging technologies. It follows a recent series of consultations on AI and digital law and is expected to be reviewed regularly as platforms and policies evolve.