As consultations go, it was significant. In our increasingly data-driven world where social media giants, insurance companies and governments are harvesting and processing personal data on an ever-greater scale, ‘a new direction’ was an opportunity to ensure robust, rights-protecting, accountability-enhancing data laws for the UK.
Instead of capitalising on this opportunity, the consultation paper contained a number of worrying and regressive proposals. The government proposed to: remove the protections against solely automated decision-making offered by Article 22 of the UK GDPR; remove the requirement to undertake Data Protection Impact Assessments; restrict the rights of people to access their own data; and reduce the independence of the Information Commissioner’s Office. You can read Public Law Project’s (PLP) consultation response here.
Now, the government is doubling down on many of these proposals, despite evidence that they could lead to unfair processing of personal data and could have a disproportionate effect on already marginalised communities. Further, one of the few more positive proposals – to introduce compulsory transparency reporting in relation to government use of automated decision-making tools – has been dropped.
Notably, the government’s consultation response comes after the Data Reform Bill was announced in the Queen’s Speech as a 'pro-growth' bill aimed at 'reducing the burdens' on UK businesses, with no mention of improving transparency or accountability. PLP signed an open letter to the Department for Digital, Culture, Media and Sport, highlighting our concern about the failure to adequately engage with civil society organisations before announcing the bill.
Now that the consultation response has been published, it is clearer than ever that the government is taking our data protection regime in the wrong direction.
In what follows, we outline some of our main concerns.
Removing the requirement to undertake a Data Protection Impact Assessment
The consultation response notes that '[t]he majority of respondents agreed that data protection impact assessments requirements are helpful in identifying and mitigating risk, and disagreed with the proposal to remove the requirement' to do them. PLP was one of these majority voices. We highlighted the importance of these impact assessments for ensuring that organisations do not deploy – and individuals are not subjected to – systems that may lead to unlawful or discriminatory outcomes.
Despite acknowledging these concerns, the government plans to remove this crucial safeguard.
Restricting the rights of people to access their own data
The consultation proposed introducing a fee for subject access requests. PLP strongly opposed this proposal and, thankfully, the government has said that it does not intend to pursue it.
However, the government still intends to make it more difficult for people to access their own data by changing the current threshold for refusing or charging a reasonable fee for a subject access request from ‘manifestly unfounded or excessive’ to ‘vexatious or excessive’. This, the government states, 'will bring [subject access requests] in line with the Freedom of Information regime'. This reasoning ignores the unique position of data subjects. Anyone can make a request under the Freedom of Information regime. But subject access requests are different. They are requests people make in relation to their own personal data. The Freedom of Information regime in this context is a false and meaningless comparison, and the effect will be to restrict individuals’ data rights.
Reducing protection against solely automated decision-making
The consultation response acknowledges that '[t]he vast majority of respondents opposed the proposal to remove Article 22' and that respondents noted that 'the right to human review of an automated decision was a key safeguard.'
Although the government does not now intend to remove Article 22, it may be significantly watered down. The plan now is to 'cast Article 22 as a right to specific safeguards, rather than as a general prohibition on solely automated decision-making' and 'enable the deployment of AI-powered automated decision-making, providing scope for innovation with appropriate safeguards in place.' We can expect more detail in the forthcoming AI white paper, but it seems very likely that the Article 22 safeguard will be weakened rather than strengthened.
No compulsory transparency
Disappointingly, the ‘new direction’ will not place the Algorithmic Transparency Standard, or similar, on a statutory footing in the near future.
The Algorithmic Transparency Standard is currently run by the Cabinet Office as a pilot. Public sector organisations are encouraged to provide information about their algorithmic tools but, crucially, they are not obliged to.
In Public Law Project’s experience, it can be very difficult to obtain information about state use of automated decision-making tools, despite their increasingly widespread use in areas like immigration and welfare. We do not consider that an optional transparency standard is adequate.
In our consultation response, we made clear our support for compulsory transparency reporting, and emphasised its importance as a first step on the road to accountable and trustworthy deployment of new technologies. Without transparency, there can be no evaluation. And without proper evaluation, we cannot know whether systems work reliably, lawfully, or fairly. Transparency is not a utopian pipe dream. Other jurisdictions, such as New York City, Canada, and France, already have compulsory reporting requirements. The alternative is that the state could use automation to make life-changing decisions affecting people, and that it would be lawful to do so in secret.
Ariane Adam is legal director and Tatiana Kazim is associate research fellow at Public Law Project