This week, the Gazette is reporting from the International Conference on Artificial Intelligence and Law, taking place at King’s College London. We make no apologies for covering an academic event. Readers may think that the current excitement about ‘robot lawyers’ is a blue-skies novelty, but the biennial conference has been running since 1987. The difference now is that the ideas and techniques under discussion are being deployed in real legal practice. 

The reason is two-fold: a ‘push’ from research breakthroughs in areas such as neural networks and a ‘pull’ from professionals more ready to try out new tools in areas such as contract review. 

The extent to which this readiness is driven by distress or a quest for competitive edge is a matter of debate. What is certain is that it is happening. I've been reporting on AI since the early 1980s, when ‘artificial intelligence’ meant rules-based decision-making systems running on green-screen hardware and the Japanese were throwing hundreds of millions of dollars at building a 'fifth generation' of computers (they failed). I can’t remember a more exciting time than now.  

Of course all this suggests we are approaching the peak of a technological hype cycle, with its associated hubris. This is invariably followed by the 'trough of disillusionment' when the bubble bursts and reaction sets in. This column is a plea not to overdo either. Let's not anthropomorphise robots, or panic about them taking over the world. But let's not fall into the facile trap of dismissing ‘artificial intelligence’ as an oxymoron, either. While it's possible that silicon brains will never perfectly replicate human intelligence, for practical purposes there doesn't seem to be any meaningful difference between the two. 

Alan Turing nailed it in 1950 with his celebrated thought experiment of an 'imitation game'. If an artificial interlocutor is indistinguishable from a human, we may as well call it a thinking machine. Characterising the question 'Can machines think?' as 'too meaningless to deserve discussion', Turing predicted that within 50 years the arrival of black boxes indistinguishable from human intelligence would change the meaning of the words anyway. Nearly 70 years on, despite the successes of systems such as IBM's Watson, we're not there yet. Quite. But applied to discrete tasks, some of which are found in legal practice, AI is already well up there with its human equivalent. Perhaps a pedantic, dim, workaholic human, but human nonetheless. 

For some reason, this bothers us. We rail against AI's apparent confirmation that intelligence is nothing more than an emergent property of our neural hardware, which, like the peacock's tail, is the runaway consequence of a natural algorithm spotted by Charles Darwin. And this makes many people even more uncomfortable. Some have religious scruples, others are alarmed by what they imagine are the consequences of endorsing 'social Darwinism'. Others just find it suspicious that the greatest single breakthrough in scientific thought since the Renaissance should have come from the bourgeois mind of a white English country gentleman and assume that, like other Victorian heroes, Darwin will receive his defenestration. 

Tough. Over the decades since artificial intelligence (at least in its 'weak' form) emerged as a possibility, we have seen numerous attempts to prove the existence of a Cartesian ghost in the machine. So far as I know, none stands up. One was mathematician Roger Penrose's attempt to prove, with the help of Gödel's incompleteness theorem (the mathematical proof that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proved within the system) that the brain is no mere calculating machine and thus no silicon imitation could match it. Penrose's theory fails because it is predicated on the suggestion that the brain evolved to do sums - and in any case his ideas about where the ghost resides rely on excursions into speculative quantum theory. 

Lawyers fretting about whether a robot can 'understand' law may be more interested in another famous attack, philosopher John Searle's 'Chinese room argument'. This is the celebrated thought experiment in which a monoglot anglophone is locked in a room equipped with intricately detailed step-by-step instructions - an algorithm - for drawing symbols in response to symbols drawn on cards slipped under the door. Of course the cards slipped under the door bear Chinese characters, as will the responses, assuming the algorithm is accurate and has been obeyed. And yet the person in the room does not understand a word of Chinese!
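For readers who like to see the trick laid bare, the room's 'algorithm' can be caricatured in a few lines of code - a minimal sketch, with an invented rule book and illustrative symbols, not Searle's actual formulation. The program produces fluent-looking replies by pure lookup, understanding nothing:

```python
# A toy 'Chinese room': the program follows a rule book mechanically,
# producing appropriate replies without any understanding of Chinese.
# The rules and symbols below are illustrative stand-ins only.

RULE_BOOK = {
    "你好": "你好！",            # a greeting earns a greeting
    "你会中文吗": "会一点。",     # "do you speak Chinese?" -> "a little"
    "再见": "再见！",            # a farewell earns a farewell
}

def room(card: str) -> str:
    """Apply the rule book to the card slipped under the door."""
    # Default reply if no rule matches: "please say that again"
    return RULE_BOOK.get(card, "请再说一遍。")

print(room("你好"))  # the room answers correctly, yet 'knows' no Chinese
```

The point survives the caricature: nothing in the lookup understands anything, yet the exchange is indistinguishable from one with a (very limited) Chinese speaker.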

The Chinese room argument has spawned a million debates about the difference between 'strong' and 'weak' AI. For our purposes, all we need to accept is that the system as a whole understands Chinese. As Darwinist philosopher Daniel Dennett has pointed out in a celebrated feud, we would readily concede that point if the Chinese room interacted in real time.

Searle's apparent paradox arises from the trick of breaking the process down into a gigantic tree of sub-routines, each one mundane enough for our everyday experience to cope with. No doubt any 'lawtech' software trained to react to the phrase 'Without prejudice' must somewhere have in it algorithms for spotting the first downward five-o'clock stroke of the initial W. But nowadays we can take for granted that character-recognition software will spot the phrase and, equipped with machine-learning algorithms, 'know' enough to shuffle a document into the decision tree dealing with 'looks like a legal threat'. So far as we're concerned, the system 'understands'. 
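The final, document-routing layer of such a pipeline is simple enough to sketch. What follows is a hypothetical toy, not any real lawtech product: a rule that spots the tell-tale phrase in recognised text and shuffles the document into the appropriate branch:

```python
# Minimal sketch (hypothetical, not any real product): route a document
# into a branch of a decision tree based on tell-tale phrases.
import re

# Each route pairs a phrase pattern with a destination branch.
ROUTES = [
    (re.compile(r"\bwithout prejudice\b", re.IGNORECASE), "looks like a legal threat"),
]

def classify(document: str) -> str:
    """Return the branch for the first matching phrase, else a default bucket."""
    for pattern, branch in ROUTES:
        if pattern.search(document):
            return branch
    return "general correspondence"

print(classify("This offer is made Without Prejudice save as to costs."))
# -> looks like a legal threat
```

No single step here - compiling a pattern, scanning a string - looks anything like understanding, which is exactly Searle's trick; yet the system as a whole behaves as if it understands what a legal threat is.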

Turing, again, was on the right track. We need to stop fretting and welcome our new artificially brained colleagues. After all, they're our relatives.