Artificial intelligence is likely to have a continuing and important role in the conduct of litigation, Dame Victoria Sharp (pictured below) said this month. The president of the King's Bench Division and Mr Justice Johnson were dealing with cases in which lawyers had apparently put AI-generated fake citations before the courts.

Joshua Rozenberg

Hence the two judges' warning: AI carries risks as well as opportunities. And those risks were all too clear: in one of the two cases the court examined, it concluded that the threshold for contempt proceedings had been met. But what do judges see as the opportunities AI will bring?

We gained a rare insight into the judiciary’s views with the publication this week of new peer-reviewed research. It was based on a series of focus groups held at the UK Supreme Court, the Royal Courts of Justice in London and online. Perhaps the most striking aspect was that serving judges were willing to express their views. Indeed, one of the researchers told me, they seemed eager to take part.

Dame Victoria Sharp

Five justices of the Supreme Court were among them. Participants from England and Wales included a county court circuit judge, five High Court judges and one from the Court of Appeal. They were all guaranteed anonymity, so we cannot be sure that the appeal judge was Sir Geoffrey Vos, who, as head of civil justice, is the judicial lead on AI.

The research – thought to be the first of its kind worldwide – was conducted in January by Erin Solovey, Brian Flanagan, and Daniel Chen. All three are based at universities abroad but they came to London because, within the English-speaking world, UK judges have led the way in considering the risks and opportunities of using AI in the courts.

The researchers were more interested in gathering evidence than making recommendations. ‘By engaging directly with judges,’ they conclude, ‘the study highlights the complexities of judicial processes and helps to elucidate an approach that can enhance human-AI collaboration and prepares the judiciary for a future shaped by AI technologies.’ 

Uppermost in the minds of most participants was that they were personally responsible for every judgment that went out in their name. That reflects judicial guidance on AI published last December.

There was also a clear understanding that justice is rooted in human decision-making – that AI could make logical decisions but only humans could identify situations where a line of reasoning would lead to an unjust decision. Another aspect that AI could not easily replicate was the cathartic effect of a human decision-maker listening to a party and reaching a decision, even if that party then lost.

But views seemed to differ on how much help AI could offer with drafting. Senior judges took pride in the way they wrote their judgments, choosing every word with care. By contrast, the researchers thought that 'for courts of first instance, which, unlike appellate courts, also perform the function of resolving disagreement over a case's facts, AI support for opinion drafting may make more sense, particularly to improve efficiency.'

But that ignored the human factor, another judge suggested. 'One of the most important functions that first-instance judges do is find the facts. And you couldn't rely on AI to set them out for you, because it may depend on who's telling the truth… But when it comes on appeal, it is extremely tedious to set out the facts.'

AI was certainly seen as a source of opportunities and efficiencies. Once a judge had written a judgment, AI could create an easy-read version or a podcast. Though sentencing is an art, AI could produce recommendations based on precedents and guidelines. 

One focus group member thought AI could help resolve many of the 1.5 million small money claims issued each year, most of which never reached a judge. Another agreed: ‘AI seems to be one way in which western countries could enhance access to justice because many people are never going to be able to afford a lawyer.’

But it could not be trusted to research and report precedents. ‘It clearly knew the case in which the answer was to be found,’ a judge acknowledged. ‘But it summarised that case completely contrary to what the decision was. And it gave the opposite answer in language which it had obviously taken from the case.’ 

Judges expressed concerns about privacy, misconceptions drawn from inputs by other users and the risks of AI systems trained on US law. But perhaps the most chilling concern identified in the focus groups was the danger of deskilling: if judges relied solely on AI-generated summaries, they would lose the ability to read and understand source documents.

Quite right. Perhaps I should reassure readers that this analysis of a fascinating academic paper is entirely my own work.

joshua@rozenberg.net
