In techie circles, a dispute has been raging about whether a conversational chatbot developed by Google is as sentient as it apparently claims to be. We can safely ignore that controversy here. We cannot, however, ignore mounting public unease about the uncanny ability of modern artificial intelligence - or, more accurately, machine learning and natural language processing - to replicate the real thing.

The ethical issues raised by the use of AI are familiar. A pressing one in the legal sphere: what happens when systems that predict the outcome of litigation become so accurate that they in effect replace the proceedings themselves? It is no answer to say that the artificial intelligence will never be that good, that humans will always be able to fool a 'Turing test' - by, to take one example, luring it into pontificating on the world record for walking solo across the English Channel.

But AI doesn't need to be perfect to be an attractive option, especially where cost is concerned. Developers of 'robojudge' software are coy about their work - in the UK, at least - but we are well on the way to 'good enough' systems capable of handling most mainstream commercial disputes. That would certainly align with the master of the rolls' vision of only the most complex disputes ending up in the courts system. Along, presumably, with cases brought by obsessive litigants in person and egotistical billionaires.

Needless to say, most people, and not just lawyers, find the idea disturbing. Earlier this year a global survey found Britons to be among the most sceptical about AI, with only 35% saying they trust a company using AI as much as they trust a company that does not. Concerns about being subject to automated decision-making also stood out in responses to the government's consultation on reform of data protection law. People are especially suspicious when it comes to the justice system: the Social Market Foundation's Future Proofing Justice survey published last month found only 24% supported the use of AI in the civil courts; the figure for criminal courts was lower. 

Many of these suspicions are valid. Machine learning systems are only as good as the data on which they are trained, and many shortcomings have come to light here in recent years. A system will be flawed if its training data is used out of context, is obtained by intrusive methods, or simply reflects past injustices. Another risk is feedback loops: a system which predicts crime in a particular area will lead to more police being deployed to that area, picking up more crimes - and those extra recorded crimes are then fed back in, appearing to confirm the original prediction.
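As a purely illustrative sketch - not drawn from any real predictive-policing system, and with every figure invented - the following Python snippet shows how such a loop can lock in an initial bias:

```python
# A toy illustration (not a real predictive-policing model) of the feedback
# loop described above. Two areas have identical underlying offending, but one
# is predicted to be worse, so it gets more patrols, so more of its offences
# are recorded, and the next prediction is based on that inflated record.
# All figures here are invented.

true_offences = {"area_A": 50.0, "area_B": 50.0}   # identical in reality
predicted     = {"area_A": 60.0, "area_B": 40.0}   # an initially biased guess

for year in range(1, 6):
    total_patrols = 100.0
    total_predicted = sum(predicted.values())
    for area in predicted:
        # patrols are allocated in proportion to predicted crime
        patrols = total_patrols * predicted[area] / total_predicted
        # more patrols means a larger share of offences gets recorded
        detection_rate = min(1.0, patrols / 150.0)
        recorded = true_offences[area] * detection_rate
        # next year's 'prediction' is simply this year's recorded figure
        predicted[area] = recorded
    print(year, {a: round(v, 1) for a, v in predicted.items()})

# The biased split never corrects itself: the system keeps 'confirming' its
# own initial prejudice even though both areas are in truth identical.
```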

To return to the robojudge example, it would be absurd to train a general-purpose dispute resolution system on the tiny subset of disputes that actually come to court - especially if those are disproportionately brought by obsessives and billionaires.

Nearly everyone agrees that artificial intelligence needs a governance regime that keeps it under control without stifling innovation. This is the aim of a white paper currently being drawn up under the government's 'National AI Strategy'. The danger here, as pointed out by industry body TechUK, is that aspects of the new regime will replicate or contradict existing regulation. 

We shall have to wait and see, but one key aspect of the regime must be to ensure that the black boxes of AI can be opened up when necessary. Here, it is worth noting that not all AI systems are based on machine learning from data. The Social Market Foundation's report reminds us that a technological predecessor, the so-called 'rule-based expert system', is still around. 

Expert systems are a way of capturing human knowledge in the form of 'if-then' decision trees. They lack the glamour and mystery of machine learning, but are perfectly applicable in the - by definition, rule-based - justice system. They are surely the most appropriate technology for building basic dispute-resolution systems and their adoption should raise fewer concerns than those around high-end AI.
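By way of a hedged illustration - the rules, field names and thresholds below are invented for the purpose rather than taken from any real system - a basic if-then triage tree of this kind might look like the following in Python:

```python
# A purely illustrative sketch of a rule-based 'expert system' for a basic
# dispute-resolution triage step. The rules, fields and thresholds are
# invented; a real system would encode rules agreed by human experts.

def triage_claim(claim: dict) -> str:
    """Walk a hand-written if-then decision tree and return a recommendation."""
    if claim["value_gbp"] <= 10_000:
        if claim["defendant_admits_debt"]:
            return "issue judgment for claimant"
        if claim["evidence"] == "written contract":
            return "refer to small claims track"
        return "refer to mediation"
    if claim["complexity"] == "high":
        return "escalate to a human judge"
    return "refer to fast track"

# Every branch above was written by a person and can be read, questioned and
# audited by a person - which is precisely what a trained model does not offer.
print(triage_claim({
    "value_gbp": 4_500,
    "defendant_admits_debt": False,
    "evidence": "written contract",
    "complexity": "low",
}))
```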

Most importantly, unlike machine learning systems, the decision trees of expert systems are set out by human designers and are thus inherently transparent. The forthcoming governance regime should recognise that AI technology does not always require a ghost in the machine.
