In a recent paper, a number of academics specialising in lawtech from the UK, the US, Canada, Australia and Singapore jointly warned of the dangers to the rule of law posed by the use of artificial intelligence by courts, tribunals and the judiciary.

We conclude that the introduction of new technology needs to be controlled by the judiciary, to maintain public confidence in the legal system and the rule of law. There is clear scope for the judiciary to use emerging technology to support their decision making and to create efficiency savings, which in turn can promote access to justice. Claims that algorithmic decision-making is ‘better’ in terms of reduced bias and increased transparency risk eroding the principle that legal decisions should be made by humans.

Our paper breaks down the trial process into a number of parts: litigation advice, trial preparation, judicial guidance, pretrial negotiations, digital courts/tribunals and judicial algorithms. It explains the core technology and shows how the main risks centre on the provision of validated and accurate datasets, transparency and bias.

The paper considers the advantages that can be identified: access to justice, particularly the speed of preparation and the reduction of the court backlog; fairness, where there is great potential to ‘level the playing field’, especially for self-represented litigants (SRLs); and audit, in particular the identification of deepfake legal submissions and judicial bias. For example, in one sentencing use case in a law tutorial group, ChatGPT produced a result that was closer to the judge’s decision than the students’ were.

It then goes on to look at the disadvantages: public confidence could be eroded if judicial decisions are made by algorithms; AI litigation advice systems may mean even fewer civil cases come to trial, impeding the evolution of the common law; algorithmic judicial decisions may be wrong or unfair, which could require an automatic right of appeal to a human judge in every case; and AI algorithms are not currently subject to professional auditing and regulation.

In conclusion, we make recommendations on JudicialTech innovation. Online dispute resolution, where the parties’ consent has been obtained, is an ideal setting in which to pilot this technology, and much work is already being undertaken with various AI systems, mainly around access to justice. Our recommendations cover:

  • Knowledge-transfer – raising awareness amongst stakeholders of JudicialTech AI and emerging technologies. This might involve workshops, or a web portal presenting JudicialTech products.
  • Experimentation – working with universities and startups to develop JudicialTech proof of concept (POC) systems. This can be a great source of research projects for both law and technology students.
  • Predictive analytics – the use of AI algorithms to analyse massive amounts of information covering litigation advice, trial preparation and judicial analytics (an illustrative sketch follows this list).
  • Sandboxes – a JudicialTech testing environment where new or untested technologies and software can be trialled and monitored securely.
  • Tech sprints – essentially hackathons: coding events that bring programmers and other interested people together to drive innovation.
  • Horizon scanning – detecting early signs of potentially important developments through a systematic examination of potential threats and opportunities, with emphasis on new technologies.
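
The predictive analytics item is the most technical of these recommendations. The following is a minimal, purely illustrative sketch of what such a system might look like: the features (claim value, case duration), the synthetic labels and the choice of model are assumptions made for illustration, not taken from the paper or from any real court dataset.

```python
# Illustrative only: a toy "predictive analytics" model for litigation outcomes.
# Every field name and value here is a made-up assumption, not real court data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for historical case records: two numeric features per case
# and a binary outcome (1 = claimant succeeded, 0 = claim dismissed).
claim_value = rng.lognormal(mean=10, sigma=1, size=500)       # damages claimed
case_duration = rng.integers(30, 720, size=500)               # days to disposal
outcome = (claim_value > np.median(claim_value)).astype(int)  # toy label only

X = np.column_stack([np.log(claim_value), case_duration])
X_train, X_test, y_train, y_test = train_test_split(
    X, outcome, test_size=0.2, random_state=0
)

# Scale the features, then fit a simple logistic regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Even in this toy form, the sketch makes the paper’s central concerns concrete: the model is only as good as the dataset it is trained on, and neither its accuracy figure nor its internal weights say anything about fairness or transparency without independent audit.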

The principal recommendation of the paper is that, to protect the rule of law, there should be a presumption against the use of judicial decision-making algorithms in conventional criminal and civil litigation unless the technology has completed a robust appraisal and testing regime supervised by the judiciary.

The issue of auditing AI systems is rapidly becoming of great importance to companies who recognise that their systems may greatly affect people’s lives. Attempts have already been made to insist on independent auditing of AI systems by computer scientists who would impose their own definitions of bias. The authors are clear that if confidence in the rule of law is to be maintained, it is essential that the judiciary plays a proactive role in the use of this technology in their trials.

The paper can be downloaded here.


Jeremy Barnett is a practising barrister specialising in fraud and regulatory law. He is honorary professor of algorithmic regulation at University College London.

Fredric Lederer is a chancellor professor of law and director of the Center for Legal and Court Technology and Legal Skills at William & Mary Law School. He is a former prosecutor, defence counsel, trial judge and court reform expert, and a pioneer of virtual courts.

Philip Treleaven is professor of computing at UCL. Twenty-five years ago his research group developed much of the early financial fraud detection technology and built the first insider dealing detection system for the London Stock Exchange. (Treleaven is credited with coining the term RegTech.)

Nicholas Vermeys is a professor at the Université de Montréal, director of the Centre de recherche en droit public (CRDP), associate director of the Cyberjustice Laboratory, and a member of the Quebec Bar.

John Zeleznikow is a professor of law and technology at La Trobe University in Australia. He has pioneered the use of machine learning and game theory to enhance legal decision making.
