Incorporating machine learning into the criminal justice system may have ‘unintended or indirect consequences that are difficult to anticipate’, according to academics calling for measures to regulate computerised decision-making in policing.

A study published today by the thinktank the Royal United Services Institute and the Centre for Information Rights at the University of Winchester says that while machine learning algorithms in policing are in their infancy, there is potential for the technology to do much more, and that ‘the lack of a regulatory and governance framework for its use is concerning’.

Issues include a lack of transparency and the choice of data used to train artificial intelligence systems, the report notes. Reliance on police arrest data, for example, is ‘particularly problematic’ as it may reflect the fact that a particular neighbourhood - or racial group - has been disproportionately targeted by police in the past. If that data then informs systems that predict future crimes, it can create a feedback loop ‘whereby the predicted outcome simply becomes a self-fulfilling prophecy’.
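The mechanism is easy to see in a toy simulation. The sketch below, in Python, models two areas with identical underlying offending, where patrols are allocated in proportion to past arrests and recorded arrests rise with patrol presence; every figure and name in it (simulate, patrol_capacity, the arrest counts) is a hypothetical illustration, not data from the report.

# A toy model of the predictive-policing feedback loop. All numbers are
# hypothetical: two areas have identical underlying offending, but area A
# starts with more recorded arrests because it was over-policed in the past.
def simulate(rounds=10, patrol_capacity=80):
    true_offences = {"area_a": 100, "area_b": 100}  # identical real crime rates
    arrests = {"area_a": 60, "area_b": 40}          # historically skewed records
    for step in range(1, rounds + 1):
        total = sum(arrests.values())
        for area, past in arrests.items():
            # The 'prediction' sends patrols where arrests were recorded
            # before, and recorded arrests rise with patrol presence.
            patrol_share = past / total
            arrests[area] += min(true_offences[area], patrol_capacity * patrol_share)
        share_a = arrests["area_a"] / sum(arrests.values())
        print(f"round {step}: area A's share of recorded arrests = {share_a:.2f}")

simulate()

Even though both areas offend at the same rate, area A's share of the recorded data never moves towards the true 50/50 split: the prediction directs the patrols that produce the arrests that confirm the prediction.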

The report argues that ‘it is essential that such experimental innovation is conducted within the bounds of a clear policy framework, and that there are sufficient regulatory and oversight mechanisms in place to ensure fair and legal use of technologies within a live policing environment.’

It recommends that the Home Office develop codes of practice setting out clear and appropriate constraints on how police forces may trial predictive policing tools, and that such trials be comprehensively and independently evaluated before any move to large-scale deployment.