Recent research from academics at Stanford has raised concerns about the performance of consumer-facing AI tools, such as ChatGPT, when people use them to access legal information and advice. A study published by the Stanford GovReg Lab found that the risks of using products based on large language models that have not been specifically adapted for legal research were especially high for litigants in lower courts, those formulating questions based on incorrect premises, and those unable to assess the reliability of the responses they receive.

Dr Natalie Byrom

Unsurprisingly, therefore, the study concluded that 'the users who would benefit the most from large language models are precisely those who the large language models are the least well equipped to serve'. Despite this warning, there is evidence that consumer appetite for these tools is growing, particularly among those who cannot afford to access traditional forms of legal advice.

Tools such as ChatGPT, built on large language models, offer clear benefits over traditional forms of online and digital advice, most notably because the people who use them do not need to define or describe their problem in legal terms to access credible-seeming information. The outputs produced by these tools are also currently free to access, delivered in plain language and cleanly formatted. As such, it is unsurprising that products like ChatGPT present as an attractive and credible 'front door' to the justice system, and that they are being used by people to understand their rights and plan next steps. However, if the information they provide is wrong, what looks like a front door may in fact be a brick wall, or a cliff edge.

These developments are taking place in parallel with proposals from the lord chancellor, lord chief justice and senior president of tribunals that herald an increased role for the private sector in delivering digital and online legal information and advice. Their joint vision for the future of the civil and family courts and tribunals, published last November, signals a significant break from past proposals, which advocated for state-funded and state-designed digital legal information, advice and triage as part of the HMCTS reform programme.

In December, deputy head of civil justice Lord Justice Colin Birss, elaborating on this vision, described the future digital justice system as a 'public-private partnership' that would signal a move 'away from state controlled centralised systems'. As part of this vision, centrally imposed data standards would support the seamless transfer of client data between private sector digital information, advice and dispute resolution providers, and eventually on to the courts for those cases where earlier resolution has not been achieved. One of the key benefits of this approach, he argued, was that it would eliminate the need for a state-funded and state-hosted 'single point of entry' to the courts.

But if the state is no longer providing the front door to the justice system, how do we protect the public, particularly when the companies providing this service instead operate outside the remit of existing legal services regulation?

The Law Society is considering some of these questions as part of its 21st Century Justice project. A workshop last month brought together experts in digital rights, LawTech, access to justice and civil law to explore options for responding to these developments. The message from participants was clear: maintaining the status quo is simply not an option. One attendee remarked that at this point, doing nothing was 'insanity' from the point of view of protecting consumers. Participants also stated that setting clear, predictable rules upfront would benefit industry. Far from undermining innovation, attendees with a background in LawTech observed that the current absence of regulatory and quality standards makes it harder for small and medium-sized entrants to the market to raise funding and develop new services.

The 'watch and wait' approach to regulating tools developed by companies not covered by the Legal Services Act 2007 was felt to afford an advantage to big players, at the expense of their smaller counterparts. Delays also undermine the prospect of regulating effectively to promote competition and protect the rights of consumers. As one participant remarked, regulating a mature market is a much more difficult and expensive task than setting clear standards and rules at the outset.

At the time of writing, the Legal Services Board is considering its response to the government's AI white paper, and several pieces of potentially relevant legislation, including the Digital Markets, Competition and Consumers Bill and the Data Protection and Digital Information Bill, are making their way through the House of Lords. The Competition and Markets Authority is developing and consulting on principles, published in September 2023, to guide competitive AI markets and protect consumers. Whether these developments, taken together, are equipped to tackle the issues at hand is entirely unclear. As the Law Society moves towards publication of an updated version of its Green Paper later this spring, I hope that it will harness this opportunity to amplify the messages from the workshop, and lend its support to calls for urgent action to protect consumers.

Dr Natalie Byrom is an honorary senior research fellow at UCL Laws