Should legal constraints be introduced to control the usage of algorithms and AI in the justice system? The question is a key one for the Law Society Public Policy Technology and Law Commission investigating the use of algorithms.

Roger Bickerstaff, partner, Bird & Bird

My general view is that more constraints are justified in the justice area than in, for example, consumer retail, because of the potentially serious negative consequences. That being said, an overly restrictive legal framework is likely to stifle development, innovation and usage, which could mean that the effectiveness and efficiency benefits promised by artificial intelligence are delayed or never achieved. 

There are some attractions to the argument for leaving this area to common law scrutiny, so that the courts can interpret the use of these technologies on the basis of existing laws. However, this would lead to uncertainty in the legal framework and a piecemeal approach. Instead, a ‘light touch’ regulatory framework could be introduced, which would enable innovative solutions to be developed and implemented whilst giving sufficient protection for problems to be tackled. 

The key features of a light touch regulatory framework for the justice system could include: 

  • Registration/notification. A central public register of AI/algorithmic solutions used in the justice system, held by an independent body. This would allow public scrutiny of the use of these solutions. Public scrutiny would put pressure on developers to give greater consideration to the impact and consequences of their products, and the public entities operating the systems would not be shielded by confidentiality. In some areas, of course, national security considerations would need to be taken into account. 
  • Transparency obligations. The Data Protection Act 2018 contains transparency obligations owed to data subjects. The extent to which these rights are available under the law enforcement elements of the act needs careful review. In addition, consideration should be given to extending data subject transparency obligations relating to the logic and operation of AI/algorithmic systems, to allow for public scrutiny and review. Clearly there are IP and proprietary rights considerations. However, the level of transparency that matters for public scrutiny need not involve the disclosure of specific IP rights. Consideration should also be given to greater transparency over the data sets used to ‘drive’ the AI, since much of the bias that occurs in the use of these solutions derives from those data sets. 
  • Legal remedies. A review should be carried out to determine whether changes to the law are needed to give citizens effective legal remedies if problems arise through the use of AI/algorithmic solutions in the justice system. At its most basic level, this review should assess whether there are effective remedies for the protection of human rights and civil liberties that could be impinged by the use of these solutions. For example, what rights would an individual have if arrested or subjected to some other form of police action as a result of the output of an AI/algorithmic solution? Would the Equality Act 2010 apply if actions taken by justice system entities, based on the outputs of AI/algorithmic solutions, infringe a protected characteristic identified by the act? If not, could the act be modified so that these actions would fall within its protections?  

Importantly, this ‘light touch’ regulatory framework would not include the prior approval of AI/algorithmic solutions before deployment in the justice system. Such prior approval would stultify development: the time and costs would be considerable, and the skill-sets of any approval body are likely to run well behind those of commercial developers.

Roger Bickerstaff is a partner at Bird & Bird. This is an edited extract from his submission to the first hearing of the Law Society Public Policy Technology and Law Commission on algorithms in the justice system.