A code of practice rather than a new law may be the best way to oversee police forces' use of machine learning in policing, academic specialists have suggested. This is one of a number of proposals to emerge from a workshop on the state of machine learning algorithms in policing conducted by the University of Winchester's Centre for Information Rights.

The report of the workshop concludes that full transparency should be a prerequisite for the use of machine learning - popularly known as artificial intelligence - in criminal justice. This should include a full data protection impact assessment carried out at an early stage of the project's planning.

The report appears as international concern mounts over the application of artificial intelligence to law enforcement. The incoming president of the European Commission, Ursula von der Leyen, has said she will unveil legislation within her first 100 days in office to provide a ‘co-ordinated European approach on the human and ethical implications of artificial intelligence’.

Earlier this summer, the Law Society's public policy commission on algorithms in the criminal justice system recommended measures to improve transparency and accountability. 

However, contributors to the Winchester workshop warned that regulating algorithms will not be a simple matter. 'The quest for algorithmic regulation might lead us towards "reinventing the wheel" whilst there seems to be an adequate arsenal of legal principles already,' the report states. 'The problem is not the absence of law but its multi-diversity and lack of specificity.'