New developments in artificial intelligence do not yet need specific new laws to control possible harmful effects, a landmark inquiry by peers recommends today. However, the House of Lords Select Committee on Artificial Intelligence's 180-page report proposes that the government draft an international ethical code - which would include a ban on autonomous weapons, so-called 'killer robots'.

In researching the report, the Lords' investigation took evidence from a wide range of ethical and legal experts, including the Law Society, law firms and Gazette columnist Joanna Goodman, as well as figures from industry and academia. Its overall finding was that the UK is in a strong position to lead developments, with its 'constellation of legal, ethical, financial and linguistic strengths'. However, committee chair Lord Clement-Jones (DLA Piper partner Timothy Clement-Jones) noted that 'AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these.'

The committee heard widely varying views on whether AI requires urgent regulation, for example to prevent important decisions being taken by 'black box' algorithms. The Foundation for Responsible Robotics, for one, told the committee 'we need to act now to prevent the perpetuation of injustice'.

However, the Law Society's written evidence argued that 'AI is still relatively in its infancy and it would be advisable to wait for its growth and development to better understand its forms, the possible consequences of its use, and whether there are any genuine regulatory gaps'. Technology specialist firm Kemp Little also warned against premature regulation, saying that 'the pace of change in technology means that overly prescriptive or specific legislation struggles to keep pace and can almost be out of date by the time it is enacted'.

The committee's report agrees, concluding that 'blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed.' It notes that the General Data Protection Regulation and the Data Protection Bill currently going through parliament address many concerns, such as the right of people to challenge decisions made about them by artificial intelligence.

However, the committee recommends that the government convene a global summit by the end of next year to develop a common framework for the ethical development and deployment of artificial intelligence systems. As a 'starting point', the committee recommends that the framework adopt five principles:

  • Artificial intelligence should be developed for the common good and benefit of humanity
  • Artificial intelligence should operate on principles of intelligibility and fairness
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence

Such a cross-sector AI code could be adopted nationally and internationally, the committee proposes. 'In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary.'