Jacob Turner is a persuasive chap. When I opened Robot Rules, I was instinctively hostile to the idea that the fast-developing field of artificial intelligence (as Turner explains, a flawed term, but one we are stuck with) needed a new body of regulation.

Nearly 400 pages on, my opinions have changed. Not because of any apocalyptic predictions about unregulated robots enslaving humankind or turning the planet into a paperclip factory: it is not that kind of book. Rather, the author carefully and authoritatively makes the case that AI presents novel problems for which current legal systems are inadequately equipped. 

Turner, a barrister (and former solicitor advocate), points out that AI is unlike other technologies, which are essentially fixed once human input has ended. A bicycle will not redesign itself to become faster. A baseball bat will not independently decide to hit a ball or smash a window. But, as we have seen, a machine-learning program can teach itself to become a better chess player, and to take autonomous decisions involving human life and death. As Turner explains for the benefit of a lay audience, AI challenges the legal concept of agency. Gazette readers will immediately spot the consequences for product liability: the more advanced AI becomes, the more difficult it will be to hold a human responsible, let alone blameworthy, for its acts.

More esoteric questions requiring answers include: If humans are augmented by AI, when, if ever, might a human lose their special status? Is it ever wrong to damage or destroy a robot? Can AI be made to follow any moral rules? 

Like most commentators on the topic, Turner acknowledges Isaac Asimov's laws of robotics, drawn up in 1942. Unlike most, however, he points out that the three laws were drafted with the very aim of creating interesting conflicts and paradoxes for science fiction stories. 

So how could we do a better job? We must start, Turner says, not by asking what the laws should be but who should write them (the positivist approach). The starting point could be an international treaty: he holds up as an example the Outer Space Treaty of 1967, which set out only broad propositions for subsequent fleshing-out. He takes particular inspiration from the fact that the treaty was agreed by Cold War rivals at the height of the space race. I am less sure we can take lessons from that: the USA and the Soviet Union had their own reasons for presenting space exploration as a peaceful scientific endeavour while conducting covert research in blatant breach of the treaty's lofty principles.

Perhaps more realistic is his proposal for an 'International Academy for AI Law and Regulation' to develop and disseminate knowledge and expertise in international AI law.

The challenge of writing rules for robots, Turner concludes, is clear, and the tools are at our disposal. The question is not whether we can, but whether we will.

We should certainly start with this book. 

Robot Rules: Regulating Artificial Intelligence, by Jacob Turner (Palgrave Macmillan)