Incorporating machine learning into the criminal justice system may have ‘unintended or indirect consequences that are difficult to anticipate’, according to academics calling for measures to regulate AI.

A study published by the Royal United Services Institute and the Centre for Information Rights, University of Winchester, says that while machine-learning algorithms in policing are in their infancy, the technology has the potential to be applied far more widely: ‘The lack of a regulatory and governance framework for its use is concerning.’

Several police forces are trialling, or considering trials of, algorithm-based artificial intelligence systems to identify crime hot-spots or to predict the likelihood of re-offending. Such initiatives have prompted concerns about the quality of the data with which machine learning systems are ‘trained’, and whether their decision-making processes will be open to challenge.
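To see why the quality of training data matters, consider the following minimal Python sketch. It is purely illustrative, not any force’s actual system; all variable names and figures are hypothetical. It shows how a risk model fitted to historical arrest records can pick up a recording bias that has nothing to do with actual re-offending.

```python
# Illustrative sketch only: hypothetical data, not a real policing system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: prior convictions and a neighbourhood indicator.
priors = rng.poisson(1.0, n)
area = rng.integers(0, 2, n)  # 0 or 1: which neighbourhood someone lives in

# In this toy world, true re-offending depends only on prior convictions...
true_reoffend = (rng.random(n) < 0.2 + 0.1 * np.minimum(priors, 3)).astype(int)

# ...but the *recorded* label also depends on policing intensity by area:
# offences in area 1 are detected twice as often, so the training labels
# over-represent that neighbourhood.
detected = true_reoffend * (rng.random(n) < np.where(area == 1, 0.9, 0.45))

X = np.column_stack([priors, area])
model = LogisticRegression().fit(X, detected)

# The coefficient on 'area' comes out positive even though, by construction,
# area has no effect on true re-offending: the model has learned the
# recording bias in the data, not the underlying behaviour.
print(dict(zip(["priors", "area"], model.coef_[0].round(2))))
```

Because the model is fitted to the recorded labels rather than the underlying behaviour, it assigns weight to the neighbourhood variable. This is precisely the kind of effect that the concerns about training data, and the calls for oversight, are intended to catch.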

The report argues that ‘it is essential that such experimental innovation is conducted within the bounds of a clear policy framework, and that there are sufficient regulatory and oversight mechanisms in place to ensure fair and legal use of technologies within a live policing environment’.

It recommends that the Home Office set out clear and appropriate constraints governing how police forces should conduct trials of predictive policing tools.

The next public session of the Law Society’s Technology and the Law Policy Commission, Algorithms in the Justice System, will take place on 12 November.