
Michael Cross

Human rights lobby group Liberty generally gets its voice heard in influential places. Its latest report, Policing by Machine, which urges an end to so-called 'predictive policing', was barely off the press before it was endorsed by the Guardian newspaper. 'Machines can make human misjudgments very much worse,' intoned the voice of liberal Britain. 'And should never be trusted with criminal justice.'

'Never' is a strong word. Pace the Guardian, before we reach for it, we should look at what Liberty's report actually discovered and whether the implications are sufficient to justify extreme measures.

The picture is not quite as alarming as it first appears. 

From Liberty's headline finding that 'at least 14 police forces in the UK are currently using predictive policing programs, have previously used them or are engaged in relevant research or trials', we might assume that Robocops are deployed as a matter of routine. 'Police forces across the UK are using predictive policing programs to predict where and when crime will happen - and even who will commit it,' the opening words state.

Not quite. Only three of the projects identified by Liberty involve predicting individual offenders' behaviour, and at least two, at Durham and West Midlands, are purely at the experimental stage. (A third force, Avon and Somerset, gets a deserved ticking off for being cagey about what it is up to.) The rest of the projects involve mapping crimes, with levels of sophistication ranging from straightforward deployments of commercial programs such as MapInfo (West Midlands) to a pilot project at West Yorkshire to identify 'areas of vulnerability'.

Few would object to police forces making maps of where their services are required. The controversy lies in using that data to predict, and so direct, future deployments.

Liberty is right to point out the dangers of relying purely on historical data when training algorithms to make decisions. The nightmare scenario is an algorithm deciding that history shows criminals are likely to have certain names, certain physiognomic attributes and live in certain postcodes. Police resources are prioritised accordingly; heightened policing leads to more arrests of the usual suspects, reinforcing the algorithm's assessments of what sort of people should be hauled in.

Before we know it, this learning loop has refined Robocop's view of the world to that of the dim racist PC in the classic Rowan Atkinson sketch, nicking individuals for being in possession of curly black hair and thick lips. Except that algorithmic racism will never be that overt: impenetrable black boxes will be impossible to hold to account. Yet, susceptible as we are to 'automation bias', we will blindly accept their conclusions.
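To see how quickly such a loop locks itself in, consider a toy simulation. It is a sketch only - two areas, identical underlying offending rates, every number invented - but it shows how a rule that patrols in proportion to recorded arrests keeps 'confirming' the skew it inherited, because arrests can only be made where officers are sent.

```python
# Toy model of the learning loop described above. Two areas have identical
# underlying offending rates; area A merely starts with a larger arrest
# record because it was policed more heavily in the past. All numbers are
# invented for illustration.

true_crime_rate = {"A": 0.05, "B": 0.05}     # identical underlying behaviour
recorded_arrests = {"A": 120.0, "B": 80.0}   # skewed historical record
total_patrols = 100

for year in range(1, 6):
    # Naive 'predictive' rule: patrol in proportion to recorded arrests.
    total = sum(recorded_arrests.values())
    patrols = {area: total_patrols * count / total
               for area, count in recorded_arrests.items()}

    # Arrests depend on where officers are sent, not on crime alone, so the
    # skewed deployment writes itself straight back into the training data.
    for area in recorded_arrests:
        recorded_arrests[area] += patrols[area] * true_crime_rate[area]

    share_a = 100 * patrols["A"] / total_patrols
    print(f"Year {year}: records show {recorded_arrests['A']:.0f} arrests in A "
          f"vs {recorded_arrests['B']:.0f} in B; {share_a:.0f}% of patrols go to A")

# The 60/40 split never corrects itself, and the recorded gap between two
# identical areas widens every year.
```

Even in this crude model the record never drifts back towards the truth; a real system juggling hundreds of features would be far harder to audit.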

But let's hang on for a moment. Certainly, there is evidence from the US showing AI systems emulating human bias. But does this justify the assertion that machine learning should never be trusted? Remember that one of the reasons we are trialling - repeat, trialling - this technology is that we know human decisions may be flawed. AI, properly designed and trained on suitable data, at least offers the possibility of overcoming present prejudices. This is one of the aims of Durham Constabulary's much-maligned experiment with its HART programme for selecting offenders eligible for alternatives to prosecution. It will be fascinating to see the results, which the force has undertaken to publish openly in peer-reviewed journals.

We should be encouraging such experiments, not calling for their ban. 

The 'never' brigade makes two questionable assumptions. One is that data sets chosen to train predictive policing algorithms must inevitably be restricted to flawed historical records. The whole point of 'big data' is to get beyond this, to take in a wider range of sources, each weighted appropriately. Obviously the methodology must be transparent - another good point made by Liberty.
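What might 'weighted appropriately' and transparent look like in practice? Here is a minimal sketch - the data sources and weights below are entirely invented - in which the weighting is a published, inspectable choice rather than an accident of whatever records a force happens to hold.

```python
# Toy illustration of blending several data sources under published weights.
# Source names and weights are invented; the point is that the weighting is
# an explicit, inspectable decision, not an artefact of historical records.

SOURCE_WEIGHTS = {
    "recorded_arrests":  0.3,  # known to reflect past deployment decisions
    "victim_survey":     0.5,  # less dependent on where officers were sent
    "calls_for_service": 0.2,
}

def area_score(indicators: dict) -> float:
    """Blend normalised indicators (0-1) for one area using the public weights."""
    return sum(weight * indicators.get(source, 0.0)
               for source, weight in SOURCE_WEIGHTS.items())

print(area_score({"recorded_arrests": 0.9,
                  "victim_survey": 0.4,
                  "calls_for_service": 0.5}))   # 0.57
```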

The second assumption is that 'automation bias' will always win out. Liberty claims that, even when allowed to override the computers, police officers will tend to play safe by following the machine's recommendations. But this is not inevitable: if evaluations of predictive policing point to systemic bias, the justice system could put in place meta-rules about when it is appropriate to agree with the machine.
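What might such a meta-rule look like? One hypothetical sketch, assuming a model that can report which features drove a given score; the feature names, thresholds and function below are all invented for illustration.

```python
# Hypothetical meta-rule: the machine's recommendation is followed only when
# it is confident and not driven by features ruled off-limits; otherwise an
# officer must reach, and record, an independent decision.

from dataclasses import dataclass
from typing import List

PROTECTED_FEATURES = {"postcode", "ethnicity_proxy"}   # invented examples

@dataclass
class Recommendation:
    risk_score: float        # model output between 0 and 1
    top_features: List[str]  # features that most influenced this score

def requires_independent_decision(rec: Recommendation,
                                  confidence_band: float = 0.15) -> bool:
    """True if an officer must make and record their own decision."""
    near_boundary = abs(rec.risk_score - 0.5) < confidence_band
    driven_by_protected = any(f in PROTECTED_FEATURES for f in rec.top_features)
    return near_boundary or driven_by_protected

# A confident score that leans heavily on postcode still goes back to a human.
print(requires_independent_decision(
    Recommendation(risk_score=0.81, top_features=["postcode", "prior_arrests"])))
# -> True
```

Whether rules of this kind would survive contact with operational reality is exactly the sort of question the trials should answer.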

There is an analogy, admittedly an imperfect one, in motor insurance. We have ample big data showing that women are safer drivers than men, but society has ruled that it is unfair to price individual premiums accordingly. Why not apply similar constraints to the application of big data in the justice system? Indeed, Liberty suggests that we could focus on the development of programs and algorithms that actively reduce biased approaches to policing.

A final caution. Liberty's report is the latest of several in recent months to ring alarm bells about AI in criminal justice. (It's nearly two years since I first attempted to raise questions about 'Schrödinger’s justice' in the Gazette.) A bandwagon is rolling here, one I have seen previously in controversies over new technologies, where opposition becomes the bien-pensant's default position. Our fondness for precautionary principles is already doing immense economic and environmental harm. It would be a shocking waste of the UK's comparative advantage in AI technology if 'algorithm' became a boo-word like 'nuclear', 'genetic' or 'fracking'.

Let's be wary of bandwagons, even those rolled by Liberty and its house journal. 

 

The Law Society’s policy commission on algorithms in the justice system will hold a final public evidence session on Thursday.