The EU has stepped in to ‘kick-start’ a meaningful discussion on the legislative direction of artificial intelligence.

Futurists have spent decades waiting for the development of artificial intelligence (AI) to catch up with science fiction. This decade, however, we are reaching a tipping point – the technology is starting to deliver on the theory. Attention has now turned to the law, and to whether a lack of legal certainty on the issues AI raises will stymie its development.

Keen to encourage AI’s development, both UK and EU legislators have been active over the past six months, setting out plans to bring legal certainty to some of the areas challenged by AI – with differing rates of progress.

EU and UK reports

On 12 January the European Parliament’s Legal Affairs Committee adopted a report calling for EU-wide rules on AI and robots. The report marked an interesting step forward for AI within Europe, as it gave some recommendations as to what AI legislation might look like. These included suggestions on issues such as:

  • Personhood: the report noted that a legal status for AI, perhaps akin to that granted to corporates, should be created at some point to help deal with issues of liability and ownership;
  • Agency: the creation of a European agency for robotics and AI;
  • Registration: a system of registration of the most advanced ‘smart autonomous robots’;
  • Code: an advisory code of conduct for robotics engineers aimed at guiding the ethical design, production and use of robots;
  • Insurance: a new mandatory insurance scheme for companies to cover damage caused by their robots; and
  • Driverless vehicles: the report noted that self-driving cars are ‘in most urgent need of European and global rules… Fragmented regulatory approaches would hinder implementation and jeopardise European competitiveness’.

In the UK, the House of Commons Science and Technology Committee reported in October 2016 on robotics and artificial intelligence. The committee called for a commission on AI to be established at the Alan Turing Institute to examine the social, ethical and legal implications of recent and potential developments, and also looked to the government to ensure that education and training systems are optimised to prepare the future workforce.

Personhood

The EU committee’s focus on the concept of ‘personhood’ is particularly welcome. The law places a strong emphasis on the concept of a ‘person’: it drives the approach the law takes to issues such as ownership and liability. That concept initially attached to the human – people owning things, committing crimes or entering into agreements. But we have seen laws adapt.

Roman law was sophisticated enough to confer different types of legal status on Roman citizens, on members of a family and on slaves, each with different implications for the ability to own property and to bear responsibility. In the modern world we have stretched the concept of legal personality by creating other entities – for example limited companies, plcs, trusts and so on – which are all capable of ownership and liability. A company can enter into contracts, incur debt and be held accountable for its actions, and these legal obligations can be distinct from those attaching to its shareholders, directors, parent or subsidiary companies. The question now is whether we should make an analogous extension for AI. Two areas are helping to drive this legal debate.

Intellectual property rights

In the UK, the law governing copyright ownership and exploitation is predominantly contained in the Copyright, Designs and Patents Act 1988 (CDPA 1988). Intellectual property law has already dealt with the advent of computers and with legal personality. In the case of a literary work which is computer-generated, the author (who typically owns the copyright) is taken to be the person ‘by whom the arrangements necessary for the creation of the work are undertaken’ (section 9(3) of the CDPA 1988), and the ‘author’ of a work can be either an individual or ‘a body incorporated under the law of a part of the United Kingdom or of another country’ (section 154(1)(c)). So the law already deals with attaching rights to machine-created content and with non-humans owning copyright. However, the answer the law gives may not suit newer cognitive AI, where the intelligent function cannot be traced back to a human programmer but is entirely machine-generated.

Perhaps we can take some guidance from recent analogous discussions. In 2014/15 the ‘monkey selfie’ case worked its way through the US courts. Legal action was brought by various interested parties regarding ownership of a photograph taken by a Celebes crested macaque using a photographer’s equipment, which was then published in a number of places, including on Wikipedia. Wikipedia and others argued that the image was uncopyrightable because it was taken by an animal, which could not own copyright, and that they were therefore free to distribute the photo. People for the Ethical Treatment of Animals argued that copyright in the photograph (a ‘selfie’ taken by the monkey) was owned by the monkey – that is, by someone other than a human or a corporation. However, a US federal court confirmed in 2016 that an animal could not own the copyright.

The law already recognises that a non-human entity may be considered an owner/author – should we extend that right to an artificially intelligent machine? We have not yet been willing to extend copyright to animals that exhibit high degrees of intelligence, but we have extended it to plcs. Do artificially intelligent machines sit somewhere in between these two? Should they?

Liability

AI also challenges how we view responsibility for the actions of technology – whether that liability arises in tort or is established under regulation or legislation. To date, the law has treated technology as something that does not typically interrupt a tortious or statutory duty of care: users of technology take responsibility for the outputs of their use of it, while providers take responsibility for the technology they provide. Where that responsibility lies depends on the facts surrounding the use and the damage. The current legal framework gives an answer that is indifferent to whether or not the technology is intelligent. The issue, then, is not a lack of certainty.

The question AI raises is: should the allocation of responsibility be different because of the AI? Does the technology operate in such a way – with independent thought and intelligence – that we need to attach responsibility and liability to the machine itself, or in a way that relieves someone of a liability they would otherwise have had? For example, employers are generally vicariously liable for their employees’ actions in the course of their employment, and the case law is well developed as to where the employer’s liability for an employee’s actions ends. Should we look at AI as we look at employees when we use it in the workplace? By raising the issue of ‘personhood’ and addressing liability in detail in its report, the EU is indicating this is a debate we should be having.

AI and the law

It is good to see a general recognition that the UK was falling behind in creating a legislative framework for AI. The EU report starts to ‘flesh out’ how legislators might approach the status of AI and the laws needed for its development. This feels like the first time a major legislative body has done so for AI at this level of granularity. Keen ‘techies’ will note that the fundamental principles of Isaac Asimov’s ‘Laws of Robotics’ – first articulated in 1942 – are referred to and form the basis of some of the proposed rules. This reliance on 75-year-old rules either testifies to the prescient and visionary work of Asimov, or shows how far we still have to go in our thinking in this area. It is perhaps too soon to tell which.

Andrew Joint is commercial technology partner at Kemp Little