When it comes to artificial intelligence (AI) regulation, the UK government is talking tough whilst weakening protections. Its recent AI White Paper heralded a ‘world-leading approach’ to regulating AI and championed key principles of transparency, fairness and accountability. But this is in tension with the government’s attempt, via the Data Protection and Digital Information Bill, to water down the obligations contained in Article 22 of the General Data Protection Regulation (GDPR). Article 22 is the provision which prohibits solely automated decision-making and is a key part of our regulatory framework. At a time when Europe and the US are increasing oversight of automated decision-making via the EU’s AI Act and state-level regulation, the UK is instead rolling back its protections.

Alexandra Sinclair, Research Fellow, Public Law Project

Automated decision-making is where a computer is used, in whole or in part, to make a decision, often one about a person. In many contexts this can be unproblematic. For example, the system which charges vehicles that pass through the London emissions zone is entirely automated. But where automated systems are used to make important decisions about people’s rights and interests, the stakes are higher and the risks greater. For example, the government uses an automated system to assist in determining whether a marriage should be investigated as a likely sham. Automation is also used to trigger fraud investigations into disabled people who claim benefits in the UK. When they go wrong, these systems can do great harm. Without the right legal framework, they can be opaque, unfair and unaccountable. One of the provisions in UK law that helps to protect individuals from unfair automated decision-making is Article 22 of the GDPR.

Article 22 as it currently stands in UK law gives a person the right ‘not to be subject to a decision based solely on automated processing’ which produces ‘legal’ or ‘similarly significant’ effects on that person. A ‘solely automated decision’ is a decision taken without meaningful human involvement. Under current UK law, solely automated decisions can only be made where there is express consent, where they are required under a contract, or where they are required by law. The proposed amendments to Article 22 in the current data bill reverse the presumption against solely automated decision-making. The effect of the bill is that solely automated decisions will more often be permissible. The prohibition will only apply where a decision uses special category data: highly personal data such as health data or information that may relate to a person’s protected characteristics. This restriction is not a sufficient safeguard.

Automated systems can have seriously negative effects on individuals even where special category data is not used. SyRI, the welfare benefit fraud detection system in the Netherlands, used ostensibly innocuous pieces of data, such as an individual’s annual water usage, but combined datasets in ways that had terrible consequences for members of the public who were wrongly accused of benefit fraud. Similarly, the abandoned A-level algorithm used to decide grades during the pandemic did not use special category personal data. The algorithm nevertheless had a huge impact on students across the country, some of whom initially missed out on university admission. Whether or not a system uses special category data should not be the decisive factor in whether a solely automated decision can be made.

Article 22’s role is to help ensure that decisions made by automated systems have ‘meaningful human involvement’, to protect against errors that can be made by solely automated systems without oversight. However, ‘automation bias’ means many decisions, ostensibly made with human involvement, are in effect solely automated, because humans disproportionately trust automated systems’ outputs rather than meaningfully reviewing them. To help prevent this, the ICO has noted that human reviewers’ involvement ‘should have actual "meaningful" influence on the decision, including the "authority and competence" to go against the recommendation’, and that reviewers should not just ‘rubber stamp’ the outputs of the automated system.

The bill as proposed allows the secretary of state, by regulation, to deem decisions as having meaningful human involvement. The secretary of state could therefore provide in regulations that a decision which was not actively reviewed by a human decision-maker nevertheless counted as having meaningful human involvement. The government’s data consultation response acknowledges that, for respondents, ‘the right to human review of an automated decision was a key safeguard’. This bill could undermine that safeguard.

The draft bill will also give the minister the power, by statutory instrument, to deem a decision as not having a ‘similarly significant effect’. A minister could deem decisions that use algorithms to exclude people from benefits, or to grade exam papers, as decisions which do not have significant effects. The bill does list safeguards relating to solely automated decision-making, but it also gives ministers a power, via statutory instrument, to amend or remove them. It is not clear why such a power is needed. If the government intends to introduce or remove safeguards, they should be placed on the face of the bill for scrutiny. The power as drafted allows ministers to remove safeguards with virtually no parliamentary oversight.

A prohibition against solely automated decisions is not in itself a panacea for the risks of automated decision-making. But instead of having a conversation about how to increase meaningful scrutiny over automated systems, the proposed data bill seeks to water down the insufficient protections we currently have. What is ‘world-leading’ about that approach remains to be seen.

Alexandra Sinclair is a research fellow at Public Law Project
