Unfounded fears over homicidal robots must not be allowed to stifle innovation.
‘I’m sorry Dave, I’m afraid I can’t do that.’ Fifty years ago, a truculent computer’s refusal to open the pod bay doors confirmed our suspicions about artificial intelligence: a smart machine would turn against its creator. Of course, the idea goes back much further than Stanley Kubrick’s 2001: A Space Odyssey. But now, with what we (mistakenly) call AI becoming reality, this hoary plotline is beginning to shape public policy, with calls for regulation, moratoria and even outright bans.
As with all such bandwagons, we should board this one with scepticism. The law needs to keep an eye on AI, but in a way that nurtures innovation and protects us from proven hazards. Not one that strangles development because of speculation about breakthroughs that may still be decades away.
An example of the right approach may be the Law Commission’s study on the legal framework for automated vehicles. Its considered pace seems appropriate for a technology years from being let loose on our streets.
There may also be a need to regulate ‘black box’ algorithms making life-changing decisions. This has become a political hot potato following alarmist and inaccurate reports of computers replacing custody sergeants in deciding whether to grant bail. The House of Commons Science and Technology Committee will shortly report on the issue of automated decision-making. It will likely recommend some sort of licensing regime – OffBot? – and that the codes running such systems be open to inspection.
Worthy suggestions, but we need to be wary of subjecting AI to a digital equivalent of the Red Flag Act. Regulation of AI is best carried out by existing bodies with expertise in their own sectors, rather than a new one with jurisdiction over everything from Facebook to fridges. In any case, plenty of legislation already exists. Article 22 of the General Data Protection Regulation provides protections from automated decisions; these are fleshed out in the Data Protection Bill going through parliament.
As ever with new technologies, some would take regulation much further. Their fears range from job losses to what futurologists call ‘the singularity’ – the point at which AI overtakes humanity and decides we are surplus to requirements.
Alarmism about AI is all the rage. The Malicious Use of Artificial Intelligence by Cambridge University’s Centre for the Study of Existential Risk features robot assassins and Minority Report-style predictive arrests.
But that report has its feet firmly on the ground compared with a statement this month by the European Commission’s group on ethics in science and new technologies. It essentially proposes a freeze on AI development until we put in place a ‘common, internationally recognised ethical and legal framework’.
According to the group, ‘autonomous systems’ should be developed and used only in ways that serve ‘the global social and environmental good, as determined by outcomes of deliberative democratic processes’.
The statement is silent over how a deliberative democratic consensus can be achieved. But even more jaw-dropping in 2018 is the implication that the ‘distribution of benefits and equal opportunities’ be organised by a governing elite rather than what is dismissed as ‘radical individualism’. Actually, radical individualism has a far better record of spreading the benefits of innovation than central committees, even democratically chosen ones. Some readers will be old enough to remember the waiting lists for GPO telephones.
But wouldn’t queueing for consumer goodies be a price worth paying to escape an AI apocalypse? Only if you believe this is a realistic threat. For a demolition of current alarmism, see Professor Steven Pinker’s book Enlightenment Now, which points out that ‘general’ AI is as far away as ever. Even if such a machine existed and turned bad, ‘without the cooperation of humans it would remain an impotent brain in a vat’.
Indeed. Fans of 2001 will recall the ultimate humiliation of the computer baddie: being dismantled by a human with a screwdriver. Perhaps the only regulation we need for AI is one mandating an off-button.
Michael Cross is news editor at the Gazette