In June the government closed its consultation on the AI white paper, which sets out its vision for UK regulation of AI. The approach is bottom-up and wait-and-see: enforcement will be left to regulators, who will have to adapt a set of non-statutory principles to their existing powers, and the government does not intend to pass AI-specific legislation or set up an AI-specific regulator. It has been apparent for some time that the government favours a light-touch approach to AI governance.

Beatriz San Martin

The government’s policy paper, Establishing a pro-innovation approach to regulating AI, set out cross-sector principles on the basis of which regulators should address AI risks through sector-specific measures. The white paper fleshes these principles out, but critics question whether this is the correct approach, especially as it differs considerably from the EU’s AI Act.

The core proposal is that sector-specific regulators will define AI in a way that is suited to their sector, then use existing powers to regulate AI suppliers, deployers and users under the supervision of central government. This model would include the following key features:

Definition of AI: instead of introducing a single definition of AI, the government proposes to define it through the two characteristics it says demand a bespoke regulatory response: (i) adaptivity, as AI systems are trained to infer data patterns that humans cannot easily discern and to draw inferences beyond their training; and (ii) autonomy, as AI systems can make decisions without the express intent or ongoing control of a human.

The government says this characteristics-based approach will future-proof its framework against unanticipated new technologies.

The five principles: a range of legislation and regulation already applies to AI, including data protection law, equality law, consumer rights law, intellectual property law and medical devices regulation, but there is no coherent framework to hold it all together. The government’s central proposal is to introduce a set of overarching principles that regulators must ‘interpret and apply to AI within their remits’. The principles are:

1. Safety, security and robustness – risks should be continually identified, assessed and managed.

2. Appropriate transparency and explainability – ‘transparency’ refers to the communication of appropriate information about an AI system to relevant people; ‘explainability’ refers to the extent to which relevant parties can access, interpret and understand the AI’s decision-making processes.

3. Fairness – AI systems should not undermine legal rights, discriminate unfairly against individuals or create unfair market outcomes.

4. Accountability and governance – governance measures should provide effective oversight of the supply and use of AI, with clear lines of accountability established across the AI life cycle.

5. Contestability and redress – where appropriate, parties should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.

Regulators may be over-burdened, and there could be jurisdictional issues too. For the government’s approach to work, each AI application must fall within the remit of a regulator with the right enforcement powers. Those powers vary among regulators, and there is little to suggest that the government has considered these issues carefully enough to ensure that AI-enabled applications and technologies do not fall through regulatory gaps.

With no single set of AI rules, businesses supplying AI-enabled products and services to UK customers across sectors will need to determine which rules apply to them. The picture is even more complicated for businesses supplying both the UK and EU markets, as the EU AI Act would impose its own compliance requirements.

Although regulators will apply the principles to their sectors, central government will have oversight through several support functions, including:

  • monitoring and evaluating the overall effectiveness and implementation of the principles;
  • assessing and monitoring risks across the economy arising from AI; and
  • conducting horizon scanning and gap analysis.

It is not clear where in government these functions will reside, though the white paper states there will not be a new AI regulator.

There has been speculation as to whether the white paper is now behind the curve. Increasingly, experts assert that a firmer approach is required. The UK government has itself recently acknowledged AI as a chronic risk in its 2023 register of the most serious threats to national security.

Following the consultation, the government intends to issue the final cross-sectoral principles to regulators, together with initial guidance on their implementation.

The government will then encourage regulators to publish their own guidance for applying the principles in their sectors, and will form partnership arrangements with organisations to deliver the first central support functions.

Longer term, the government aims to deliver a first iteration of the central functions, fill any gaps in the regulatory guidance, and publish a draft central, cross-economy AI-risk register for consultation.

Parliament is also examining the impact of AI. The Commons Science, Innovation and Technology Committee published an interim report on AI governance in August, noting that the UK can leverage its deep expertise in AI and related disciplines, and its reputation for trustworthy and innovative regulation, to position itself as a go-to destination for the development and deployment of AI. The interim report also contends that ‘a tightly-focused AI bill in the next king’s speech would help, not hinder, the prime minister’s ambition to position the UK as an AI governance leader’; without one, other jurisdictions’ frameworks ‘may become the default even if they are less effective than what the UK can offer’.

Beatriz San Martin is a partner at Arnold & Porter, London. Senior counsel Peter Schildkraut and associate Lewis Pope co-wrote this article