On 2 February, EU member states unanimously agreed the text of the harmonised rules on artificial intelligence, the so-called AI Act. The final draft of the act is expected to be adopted by the European Parliament in a plenary vote in April and to come into force in 2025, with a two-year transition period.

Ibrahim Hasan

The AI Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety and human rights. The act will ban AI applications that pose an ‘unacceptable risk’ (for example, real-time and remote biometric identification systems, such as facial recognition) and impose strict obligations on those considered ‘high risk’ (for example, AI used in EU-regulated product safety categories such as cars and medical devices). These obligations include adherence to data governance standards, transparency rules and the incorporation of human oversight mechanisms. Despite Brexit, UK businesses and entities engaged in AI-related activities will be affected by the act if they intend to operate within the EU market. The act will have extra-territorial reach, just like the EU GDPR.

The UK government’s own decisions on how to regulate AI will be influenced by the EU approach. Its AI white paper, A pro-innovation approach to AI regulation, published last March, sets out the UK’s preference not to place AI regulation on a statutory footing but to make use of ‘regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used’. The AI Act may force the government to think again.

The Data Protection and Digital Information (No. 2) Bill is at committee stage in the House of Lords. It will make changes to the UK GDPR, the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). The current bill is not substantially different from the previous version (see The Data Protection and Digital Information Bill). The information commissioner, while broadly supportive, has expressed concerns about the provisions on sharing personal data in social security contexts, arguing that the current wording may be too broad or vague and could lead to misuse or overreach in the handling of personal data by data controllers. The Open Rights Group, which campaigns on surveillance, privacy and free speech, claims that the bill would, among other things, weaken data subjects’ rights, water down accountability requirements and reduce the independence of the Information Commissioner’s Office. Despite these objections, I expect the bill to be passed in May, in a form similar to the one now published, and to come into force later this year.

In October, a tribunal overturned two GDPR notices issued by the ICO to Clearview AI. Clearview operates an online database containing 20 billion images of people’s faces scraped from the internet. Customers can upload an image of a person to its app so that the person can be identified.

In May 2022, the ICO issued a monetary penalty notice of £7,552,800 to Clearview for breaches of the GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO found that the UK GDPR applied under Article 3(2)(b) (territorial scope), concluding that Clearview’s processing activities ‘are related to… the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom’.

On appeal, the First-tier Tribunal (Information Rights) (in Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC)) concluded that, although Clearview did carry out data processing related to monitoring the behaviour of people in the UK, the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and the UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. Because Clearview provided its services exclusively to foreign law enforcement and government agencies, its processing fell within this exclusion. However, the tribunal noted that the ICO could have taken action under the Law Enforcement Directive (implemented in the UK by Part 3 of the DPA 2018), which specifically regulates the processing of personal data in relation to law enforcement. This is a significant ruling with implications for the extra-territorial effect of the UK GDPR and the ICO’s powers to enforce it. The ICO is appealing to the Upper Tribunal.

In December, the ICO fined the Ministry of Defence (MoD) £350,000 for disclosing the personal information of people seeking relocation to the UK shortly after the Taliban took control of Afghanistan in 2021. On 20 September 2021, the MoD sent an email to a distribution list of Afghan nationals eligible for evacuation using the ‘To’ field rather than ‘Bcc’. The email addresses were visible to all recipients, and 55 people had thumbnail pictures on their email profiles. Two people ‘replied all’ to the entire list of recipients, one of them disclosing their location. Had the personal data fallen into the hands of the Taliban, it could have resulted in a threat to life. This fine shows that, despite the ICO’s revised approach to public sector enforcement (announced in June 2022), the commissioner will still use his fining powers in the most serious cases.

John Edwards, the information commissioner, said: ‘By issuing this fine and sharing the lessons from this breach, I want to make clear to all organisations that there is no substitute for being prepared. As we have seen here, the consequences of data breaches could be life-threatening. My office will continue to act where we find poor compliance with the law that puts people at risk of harm.’

Ibrahim Hasan is a lawyer and director of Act Now Training