The UK approach to AI regulation – favouring sector-led oversight over statutory controls – combined with the rapid adoption of AI in legal practice, has highlighted areas where regulatory and judicial frameworks will likely evolve further. Recent cases such as Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank (which were heard together by the Divisional Court) have illustrated the risks of unverified AI outputs and the courts' need to adapt existing legal principles to AI-related disputes. The judiciary has responded with guidance, placing greater responsibility on all legal professionals to verify AI-generated content and maintain ethical standards.

John McElroy

Kurt Shead

UK approach to AI in litigation

The UK has not enacted a comprehensive AI statute, relying instead on sectoral oversight and general legal principles. This has left courts and practitioners grappling with how to manage AI as it becomes more prevalent in legal research, drafting and evidence management. The government’s AI Opportunities Action Plan and updated judicial guidance reflect an evolving stance, emphasising responsible AI adoption while safeguarding the justice system’s integrity.

Judicial critique and case law: AI hallucinations 

The Ayinde and Al-Haroun judgment marks one of the first times the Divisional Court has been required to address directly the misuse of AI by legal professionals. Both cases were referred to the Divisional Court under the court’s Hamid jurisdiction, which stems from the court’s inherent power to regulate its own procedures and to enforce the duties that lawyers owe to the court. These powers allow the court to summon legal professionals to explain their actions, and even to refer them to the Solicitors Regulation Authority (SRA) or the Bar Standards Board for misconduct.

In her judgment, the president of the King’s Bench Division, Dame Victoria Sharp, spoke of the absolute necessity of ensuring that AI-generated content is properly verified. She warned of ‘serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those… with individual leadership responsibilities’. The court criticised the submission of pleadings containing fabricated citations produced by generative AI, holding that such conduct could amount to professional misconduct and result in wasted costs orders or regulatory referrals. 

Commenting on the lack of due diligence on display, Sharp explained that she found it ‘extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around’. The court reiterated that legal representatives are personally responsible for all material submitted, regardless of its source, and warned that failure to verify AI-generated research could constitute negligence.

Although the above cases are two very recent examples of the English courts taking a firm stance on AI-related issues in the legal sector, the challenges posed by AI are not wholly new. Fabricated AI-generated citations have surfaced in earlier proceedings, reflecting longstanding concerns about the accuracy and reliability of AI-generated legal content.

Judicial guidance: the new direction

The recent judicial guidance, published in October, replaced an earlier version, expanding the glossary of common AI terminology and the discussion of bias in training data and AI hallucinations. Moreover, the guidance makes clear that legal professionals are responsible for any AI-generated material submitted to the court. AI tools are not a substitute for legal reasoning or authoritative research, and their outputs must be independently verified. Confidential or privileged information should never be entered into public AI platforms, to avoid breaches of data protection.

Judges may also question the source and verification of AI-generated submissions, particularly from unrepresented litigants. The guidance warns of risks such as bias and misinformation, and encourages the use of secure, approved AI tools within the judiciary. 

It is therefore critical that UK solicitors review and understand the guidance. The following points are worth keeping in mind: 

  • Verification: All AI-assisted research, drafting and evidence must be checked for accuracy. Failure to do so risks wasted costs and reputational harm.
  • Ethical duties: The SRA Principles require solicitors to act in each client’s best interests and in a way that upholds public trust and confidence in the profession, which now extends to understanding the limitations and risks of AI tools. 
  • Internal governance: Law firms should implement protocols for AI use, including training, oversight and policies on consumer AI platforms, to protect confidentiality and privilege.
  • Regulatory engagement: Active participation in consultations with the Law Society, the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA) will help shape emerging standards and anticipate compliance expectations. 

Conclusion

The UK’s approach to AI in litigation is marked by judicial caution, increased professional responsibility and soft law guidance. Recent cases have exposed the dangers of unverified AI outputs and the need for legal professionals to adapt. For solicitors, this means carefully checking documents that include AI-generated content, staying alert to ethical considerations and actively managing potential risks. As the UK’s AI Regulation Bill has been delayed, pressure continues to mount for statutory intervention. This year will therefore be pivotal in shaping a coherent framework for AI in UK legal practice, and the updated guidance is a welcome step. 


John McElroy is vice-president of the London Solicitors Litigation Association and a partner at Fieldfisher. Kurt Shead is a solicitor at Fieldfisher.