Earlier this month, the case of Felicity Harber, who was found to have relied on fake legal authorities generated by 'an AI system, such as ChatGPT' in her appeal before the First-tier Tax Tribunal, was widely reported. Harber's case is the latest example of an unrepresented litigant relying on inaccurate AI-generated material – in May this year, reports surfaced of a similar incident in Manchester County Court.


Despite these examples, which have drawn attention to the risks posed by unrepresented litigants relying on information provided by data-driven tools, the response from policymakers, regulators and the senior judiciary has been muted. Judicial guidance on generative AI published this month seemed to confirm that the courts in England and Wales will not be following other jurisdictions in requiring parties to declare where AI has been used in preparing materials submitted to the court. Legal regulators appearing before the Justice Select Committee this month were also relatively sanguine about these developments: whilst the Solicitors Regulation Authority alluded to the 'shifting set of plates' created by advances in technology, it nevertheless declared the Legal Services Act 2007 'absolutely fit for purpose'. However, research I conducted earlier this year in support of the Law Society's 21st Century Justice Project suggests that the current regulatory landscape is poorly equipped to protect people who turn to data-driven tools in the absence of legal advice.

The Legal Services Board's response to the government's AI white paper earlier this year confirmed that the current framework's focus on legal activities and professional titles means that many technologies and products will be excluded from regulation under the Legal Services Act 2007. The lack of urgency in the current response appears to be driven by the assumption that the primary users of data- and AI-driven tools will continue to be lawyers and legal professionals, who will incorporate insights from these products into the delivery of their existing regulated services to clients. However, the examples above indicate that a combination of widespread unmet legal need and increasingly accessible, user-friendly products may result in people turning to, and relying on, information provided by companies and tools that fall outside existing legal services regulation – exposing the most vulnerable litigants to harm that cannot adequately be compensated for by existing consumer protection or data protection law.

Further evidence that these products are attractive to people who cannot afford professional legal advice comes from researchers at the Stanford Legal Design Lab, who have led work to understand how people use generative AI tools such as ChatGPT and Google Bard when facing common legal problems such as eviction, debt and divorce. Their research has found a strong consumer appetite for these tools – one that only increases once people have tried them. The free, instantaneous, cleanly formatted and seemingly authoritative nature of the information provided is enough to command confidence, even when that information proves inaccurate. These findings have led the study's author, Professor Margaret Hagan, to conclude that when it comes to people using these tools to resolve legal problems, 'the genie is out of the bottle'.

This reported public appetite for low-cost, data-driven tools is especially concerning in the absence of agreed standards for assessing and comparing their accuracy and reliability. Research into the performance of case outcome prediction technologies – tools that claim to forecast the outcome of a civil litigation event or case, and therefore to help guide litigation strategy – offers a stark example. These tools have attracted significant attention in the legal press and amongst policymakers because of their potential to augment or replace lawyers' expertise.

However, a recent review led by Dr Masha Medvedeva and Dr Pauline McBride of 171 papers claiming to report on tools capable of predicting court judgments from the facts available to parties prior to litigation found that in only 7% of these papers were the tools doing what they set out to do. The remainder predicted outcomes from the facts contained in the text of the judgment itself – facts that would not necessarily be available to the parties before commencing proceedings. Medvedeva and McBride conclude that 'a lack of attention to the identity and needs of end-users has fostered the misconception that legal judgment prediction is a near solved challenge, suitable for practical application'.

Given these challenges, there is an urgent need to develop a proportionate policy and regulatory response to ensure that advances in data-driven technology can be harnessed to address, rather than exacerbate, existing access to justice challenges.

As a first step, better data is needed about who is producing data-driven tools, where they are being used, how they perform and what impact they have. This may require changes to procedural rules of the kind made in other jurisdictions, to ensure we have a record of which tools are being used to prepare materials submitted in proceedings, who is using them and what effect they have on case outcomes.

Finally, the remit of the Legal Services Act 2007 must be re-examined. The regulatory objectives of promoting access to justice, upholding the rule of law and protecting consumers are well placed to address the kinds of harm, both collective and individual, generated by an expanded role for data-driven tools as a substitute for access to legal advice. Yet the structure of the present framework means that consumers are currently denied access to appropriate redress. It also means that companies producing these tools cannot access the specialist regulation and guidance that could help them design better products and build confidence in them.

Far from being 'fit for purpose', current arrangements serve only to create uncertainty – undermining the kind of responsible innovation the justice system badly needs and that regulators and policymakers wish to see.

 

Dr Natalie Byrom is an honorary senior research fellow at UCL Laws
