The Law Society has recently launched a Public Policy Technology and Law Commission under the chairmanship of its incoming president. The Commission will spend a year focusing on a hot topic: the use of algorithms in the justice system. This is to be welcomed.

Jonathan Goldsmith

The text accompanying the launch said that the Commission will focus on England and Wales, but take appropriate account of international developments. I am not sure I agree with those priorities, because much excellent technical work on the impact of algorithms on justice has already taken place beyond our borders – yes, really – and it would be foolish to waste time reinventing the wheel. The international work should be examined first, to see whether there are gaps that need filling for application to our jurisdiction, before we launch work of our own.

For instance, the EU Fundamental Rights Agency (FRA) published a report just last month, drawing attention to the fact that when algorithms are used for decision-making, there is potential for a breach of the principle of non-discrimination enshrined in Article 21 of the EU Charter of Fundamental Rights. Its paper explains how such discrimination occurs, and suggests possible solutions.

The FRA report is based on previous work by the Council of Europe (‘Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data’) and the European Parliament (‘Resolution on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement’).

The FRA recommends a number of steps, which would give the Law Society Commission a sensible starting point for its work:

  • authorities should be as transparent as possible about how algorithms are built
  • fundamental rights impact assessments should be conducted to identify potential biases and abuses in the application of, and output from, algorithms (a minimal illustration of one such output check appears after this list)
  • the quality of data should be checked, including by collecting metadata, i.e. information about the data itself
  • authorities should ensure that the way the algorithm is built and operates can be meaningfully explained – including, most importantly, which data were used to create the algorithm – to facilitate access to remedies for people who challenge data-supported decisions
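
To make the second of those recommendations concrete, here is a minimal sketch of one possible output check: comparing an algorithm’s adverse-decision rate across groups. Everything in it – the group labels, the decision log, the metric – is hypothetical and invented for illustration, and a genuine fundamental rights impact assessment would go far wider than this single disparity figure.

    # A sketch of one possible FRA-style output check: comparing an
    # algorithm's adverse-decision rate across groups. All data here
    # are hypothetical, invented purely for illustration.
    import pandas as pd

    # Hypothetical decision log: one row per person, recording a
    # protected characteristic and the algorithm's decision
    # (1 = adverse outcome, e.g. flagged as high risk).
    log = pd.DataFrame({
        "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
        "adverse": [1,   1,   0,   1,   0,   1,   0,   0],
    })

    # Adverse-outcome rate per group.
    rates = log.groupby("group")["adverse"].mean()
    print(rates)  # group A: 0.75, group B: 0.25

    # Disparate impact ratio: lowest rate divided by highest. A value
    # far below 1 flags potential indirect discrimination and should
    # trigger closer scrutiny of the algorithm and its training data.
    print(rates.min() / rates.max())  # 0.25 / 0.75 ≈ 0.33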

In addition, in another part of the Council of Europe, a working group of the European Commission for the Efficiency of Justice (CEPEJ) is preparing guidelines on ‘The challenges of the use of artificial intelligence algorithms in judicial systems’, drawing on research from several countries, including the UK.

The UK research includes, first, work by University College London (UCL) on predicting judicial decisions of the European Court of Human Rights – which I assume the Law Society Commission will know about, since one of its commissioners is also a UCL professor – and, second, the Harm Assessment Risk Tool (HART), developed with Cambridge University and now being tested in the UK. HART uses machine learning over five years of Durham police archives to assess the risk posed by suspects, based on about thirty factors, some unrelated to the crime committed, such as postcode and gender.
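
For readers who want a feel for what such a tool involves, here is a minimal sketch of a HART-style risk classifier. HART is reported to use random forest models, and the sketch uses one too, but nothing else in it is real: the data are synthetic, and the thirty-factor layout and three risk bands are illustrative stand-ins for the description above.

    # A minimal sketch of a HART-style risk classifier. This is NOT
    # the actual HART model: the data are synthetic and the feature
    # layout is hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_suspects, n_factors = 1000, 30  # c. 30 factors, per the description

    # Synthetic stand-ins for archive data. Imagine two of the columns
    # encode postcode and gender: nothing in the model itself stops such
    # offence-unrelated factors from driving the prediction, which is
    # the nub of the discrimination concern.
    X = rng.random((n_suspects, n_factors))
    y = rng.choice(["low", "moderate", "high"], size=n_suspects)  # risk bands

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)

    # Score a new suspect's thirty factors.
    new_suspect = rng.random((1, n_factors))
    print(model.predict(new_suspect))  # e.g. ['moderate']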

CEPEJ’s work is only at a tentative stage, but already offers interesting insights:

  • even in civil justice, predictive algorithms may have a negative impact – this time on the right to a lawyer – because lawyers may refuse to take on cases which the algorithm predicts will fail
  • there needs to be a public debate before AI is introduced into the justice system, to ensure that the difficulties and controversies are properly aired among appropriate stakeholders
  • testing, scrutiny and evaluation, both before and after the introduction of algorithms, are essential
  • it is questionable whether AI can ever represent life in all its complexity

Finally, the International Bar Association last year produced a comprehensive study of the academic research undertaken to date on the future of legal services, including much research investigating the impact of AI.

Around the time the new Commission was launched, the Law Society also published a horizon-scanning report on artificial intelligence (AI) and the legal profession. The report goes wider than the use of algorithms, because it looks at all aspects of AI, including techniques, such as neural networks, which mimic the structure of the human brain.

It is an excellent study, but it too does not build sufficiently on what is happening in Europe and the rest of the world. For instance, there are a few sentences on bias under the heading of transparency, but they do not give discrimination against certain groups – one of the gravest accusations levelled against AI, and one outlined fully in the FRA paper – the emphasis it deserves.

The establishment of the new ‘technology and the law’ Commission is timely and important. My wish is that it uses its resources well by looking first at the wealth of material that already exists on the topic.