Developers and users of artificial intelligence systems will have to identify a legal person to be held responsible for any problems, under proposals for regulating AI unveiled by the government today.

The proposed 'pro-innovation' regime will be operated by existing regulators rather than a dedicated central body along the lines of the one being created by the EU, the government said.

The proposals were published as the Data Protection and Digital Information Bill, which sets out an independent data protection regime, was introduced to parliament. The measure will be debated after the summer recess.

The core principles of AI regulation proposed today will require developers and users to:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

Regulators - such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency - will be asked to interpret and implement the principles.

They will be encouraged to consider lighter-touch options, which could include guidance and voluntary measures or the creation of sandboxes - trial environments where businesses can check the safety and reliability of AI technology before introducing it to market.

Digital minister Damian Collins said: 'It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.'
