Barack Obama, Emma Watson, David Attenborough and Keanu Reeves. The link between them? Each of these well-known figures has been featured in videos that depict events, statements or conversations that never took place. Colloquially, this technology is known as ‘deepfakes’ (a portmanteau of ‘deep learning’ and ‘fake’).

Oliver Lock

This term was once largely confined to a scientific lexicon. Today, free access to the technology and the rapid proliferation of its recent (mis)use to depict celebrities, politicians and other high-profile individuals in highly private, embarrassing or bizarre scenarios have made it a household word. So, what can you do if you find yourself the target of a deepfake?

Risks posed by deepfakes

Where photoshopped images may once have been at the centre of most concerns, deepfakes, created using AI algorithms, are a whole new ball game. The algorithms can process vast amounts of footage of a person and use it to generate fake footage that is often indistinguishable from the real thing.

Deepfake technology has legitimate uses in fields such as education and film. When used illegitimately, however, the implications are profound, particularly for victims who are public figures. A proliferation of false pornographic videos depicting celebrities swiftly followed the technology first becoming freely available. The threat also extends to politicians and other public figures when deepfakes are used to spread disinformation, sway public opinion or damage reputations.

Existing legal action

At present, English law provides a range of possible options to tackle deepfakes, though many might argue their effectiveness is uncertain.

Defamation

Our libel laws may be a useful tool where a deepfake depicts a victim saying or doing something that has caused, or is likely to cause, serious harm to that person’s reputation (for example, a video in which the subject purportedly admits to criminal activity). The ‘meaning’ of the allegation in the video, taken as a whole, will be material here. If the reasonable viewer is not aware of the video’s falsity, it may be possible to bring a claim against the creator and/or publisher of the video, such as the host website.

(False) privacy and harassment

Where a deepfake depicts an individual in a private situation – pornography being the obvious example – a claim may be brought on privacy grounds. It is irrelevant that the content is false; English law provides that if the victim has a reasonable expectation of privacy in relation to the type of information, then a remedy can be sought irrespective of its veracity. Where the individual has been caused alarm or distress, it may also be possible to bring a claim for harassment.

Data protection

Perhaps the most interesting (but untested) avenue is data protection. It is arguable that, in processing the personal data required to create a deepfake, the creator is a controller who is subject to strict obligations on how the source material is processed. In the absence of any lawful basis for processing an individual’s face and voice, the creator may be liable. This has the potential to develop into a de facto ‘image right’ for individuals who are the victims of malicious deepfakes.

Intellectual property

A deepfake may also breach IP rights, for example by unlawfully exploiting a brand. However, it is worth noting that individuals in England do not benefit from standalone ‘image rights’ as they do in the US. Copyright may also be relevant where other original works have been substantially copied in the video’s creation. Cases are currently proceeding in the English and US courts on whether it is an infringement to use copyrighted material to train AI that then creates content.

Criminal law

Criminal law may also provide some protection. It is an offence to send communications (which would arguably include deepfakes) with the intent to cause distress or alarm to the recipient. However, criminal cases may be difficult to pursue given that they require proof beyond reasonable doubt that the deepfake was created with the intent to cause harm or distress, particularly where the perpetrator is anonymous.

The non-consensual sharing of pornographic deepfakes would also become a specific criminal offence if the Online Safety Bill becomes law.

Looking to the future

While there are various technical solutions to prevent the creation and spread of deepfakes, they are certainly not infallible, and skilled deepfake creators can circumvent them. For victims of deepfake technology, the law therefore remains an important tool and source of protection. As deepfakes become more ubiquitous and the lines between technology and reality grow increasingly blurred, the law will likely evolve. Ultimately, though, there is no single way to prevent the creation and spread of deepfakes for nefarious purposes. That is a task that will require a combination of legal, technical and societal efforts.

Oliver Lock is an associate in Farrer & Co’s reputation management team