The use of personal data has been both revolutionary and evolutionary. The explosion in the availability of personal data came from technological developments such as the personal computer, social media platforms, and the smartphone. The ability to track and record our communications, the creation of large volumes of publicly available data, and the ability to buy datasets have led to a new technological innovation that many predict will have explosive consequences.

Melissa Stock

ChatGPT is a massive technological leap in generative artificial intelligence (AI), combining machine learning with natural language. Developed by OpenAI, it is a platform that uses a deep learning model to generate human-like responses to questions and prompts. ChatGPT was released to the public in November 2022 and has led to a surge in public debate on the benefits and risks of AI development. It has also come to the attention of data protection regulators globally.

In March 2023, a bug in OpenAI’s ChatGPT service exposed some users’ chat histories and partial credit card details to other users. This prompted Italy’s data protection authority to temporarily block ChatGPT and open an investigation. The service has since resumed in Italy, but data protection regulators in Germany, France and Canada have begun investigations of their own. In the UK, the Information Commissioner’s Office (ICO) has cautioned companies against rushing to adopt AI and is offering an expedited innovation advice service.

Public opinion on the mass use of our personal data is difficult to gauge. It is recognised that the large technology companies provide valuable ‘free’ services. These services are of course not free but are provided in exchange for our personal data. Some argue that this trade-off is skewed because the financial benefit that technology companies gain from our personal data far outweighs the benefits that users of the services receive. The adoption of AI has produced a similar split in public opinion, between those who welcome the benefits it can bring and those who are wary of the potential risks.

How far data protection legislation can be used to protect against AI harms is unclear. The concerns that most people have about the use of AI map closely onto the data protection principles, in particular lawfulness, fairness, transparency, accuracy and security. The General Data Protection Regulation also constrains the processing of personal data for profiling and automated decision-making.

Playing catch-up

However, data protection from an individual standpoint still has its problems. While data protection law gives individuals specific rights over their personal data, it places the onus on the individual to complain or to bring infringements to the attention of the supervisory authority. Where personal data is being unlawfully processed, there is usually little transparency. Cambridge Analytica was only caught out because of a whistleblower, and the Clearview AI facial recognition service scraped people’s images and other personal information from the internet for years before it came to the attention of regulators. Further, where AI causes collective harm, it could be difficult to bring a class action in the UK relying on data protection law.

There is no single law in the UK that specifically regulates AI. The UK wants to become a ‘science and technology superpower’ by 2030 and sees AI development as central to this goal.

The Department for Science, Innovation and Technology released a white paper on the government’s approach to AI regulation in March 2023. Rather than introducing legislation, the government proposes a non-statutory, principles-based framework to be implemented by existing regulators such as the ICO, Ofcom, the Financial Conduct Authority, and the Competition and Markets Authority.

The UK government has stated that it is adopting a ‘light touch’ approach to AI, whereas the European Union is moving rapidly towards regulation. Any AI legislation will have to complement data protection law. Recent issues arising from generative AI (for example bias, security failures, the creation of deepfakes and scams, and the fabrication of information) make clear that the UK’s proposed Data Protection and Digital Information Bill does not contain the safeguards that will become necessary as AI is developed and deployed.

Prime Minister Rishi Sunak recently acknowledged that guardrails and regulation are required for AI, implying that the recent white paper may no longer represent the intended approach. The Prime Minister’s press office announced in June that the UK will host the first global summit on the regulation of AI later this year, where the government’s position may be clarified.


Melissa Stock is a data protection barrister at Millennium Chambers