The issue of artificial intelligence and its interaction with the law is undoubtedly a ‘hot topic’. A particularly worrying development for family lawyers is the use of ‘deepfakes’ within proceedings to manipulate evidence for the benefit of one party.

You may be aware of the increasing use of apps to digitally doctor text messages, either to delete messages which do not assist a client’s case (see, for example, Re IK (A Child) (Hague Convention: Evidence Consent) [2022] EWHC 396 (Fam)) or to fabricate messages that were never sent by the other party at all. The same goes for the editing of bank statements and other documents within financial remedies proceedings to present a financial picture different from the one actually enjoyed by a party (see X v Y [2022] EWFC 95).

There is, however, an even more troubling phenomenon now developing: the manipulation of audio and video footage using AI. Limited knowledge and understanding of this area adds to the risk that such evidence is taken at face value, without exploration or even consideration of whether the audio or video clips exhibited have been edited to present a false narrative to the court.

Deepfakes and sensitive material

As the emerging practice of parties attempting to deploy doctored evidence within family proceedings demonstrates, once techniques for misusing AI are developed, it is not long before their use spreads into the family court arena.

Keeping an eye on how the government may legislate to combat the misuse of AI, and on how the misuse of technology is experienced in society more generally, can give a good indication of the type of issues family practitioners may be confronted with.

Deepfake

In November last year, the government announced its intention to implement recommendations made by the Law Commission to introduce legislation targeted at preventing the misuse of deepfake technology and protecting victims of intimate image abuse. That legislation will make it a criminal offence to share pornographic deepfakes or to install hidden equipment to take intimate images.

The fact that legislation is felt to be necessary indicates a practice on the rise. As Re M [2022] EWHC 986 (Fam) demonstrates, the family court has had to move swiftly to provide guidance on how the use of intimate images should be controlled when it is asked to determine contested allegations.

The family court is acutely alive to how perpetrators of abuse may seek to use proceedings as an opportunity to inflict further harm. That AI can be used to manipulate or engineer an image presents a further risk that must now be weighed in the balance.

At the very least, AI can be used to produce images that can be misused, in turn demanding explanations which invariably cause upset, distress and alarm. At its highest, the technology can be deployed in an attempt to cast doubt on, or dispel, allegations that have been made entirely properly. On either scenario, establishing the veracity of an image is essential.

The upsurge in parties seeking to rely on covert recordings is something with which we are all now familiar. The use of spyware may present the next frontier. Once installed, this technology can track movement, mirror devices and copy data. How the family court grapples with the challenges presented by evidence obtained in this way is a question that remains unanswered.

As AI develops and becomes more sophisticated, the issue of deepfakes is not something that can or should be overlooked. So what can we, as lawyers, do to combat deepfakes in our cases?

Tips on how to manage possible deepfakes and edited evidence

When you are presented with evidence from another party, such as text messages, audio, or video files, and your client queries their accuracy or validity, there are a few initial steps which can be taken to review the evidence presented.

  • Ask for the original electronic files relied on, rather than hard-copy printouts, scans, or screenshots.
  • Once you have the original electronic files, review them carefully and examine the attached metadata, which can reveal when the files were created, who authored them, timestamps showing when they were edited, and so on (a short script for surfacing this information is sketched after this list).
  • From there, you will quickly be able to spot any immediate red flags as to the validity or editing of the clips which support your client’s concerns. These can in turn be raised within proceedings and explored further.
  • Further down the line, you may also seek to instruct a forensic expert to analyse fully, and report on, any evidence about which there is concern.
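
By way of illustration, the sketch below shows one way of surfacing file metadata in Python. It is a minimal example only: it assumes the Pillow library is installed for reading EXIF data from images, and the file name is hypothetical. Bear in mind that filesystem timestamps and EXIF tags can themselves be altered, so anything revealed this way is a prompt for further questions rather than forensic proof.

```python
import os
import sys
from datetime import datetime, timezone

# Pillow (pip install Pillow) is assumed for reading EXIF data from images.
from PIL import ExifTags, Image


def inspect_file(path: str) -> None:
    """Print filesystem timestamps and, for image files, EXIF metadata."""
    stat = os.stat(path)

    # Filesystem timestamps relate to *this* copy of the file: they change
    # whenever a file is copied or re-saved, so treat them as indicative only.
    print(f"File: {path}")
    print(f"  Size:     {stat.st_size} bytes")
    print(f"  Modified: {datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc)}")

    # Embedded EXIF metadata (images only): creation date, camera make and
    # model, and editing software, where the device or app recorded them.
    try:
        exif = Image.open(path).getexif()
    except OSError:
        print("  Not a readable image; no EXIF data extracted.")
        return

    if not exif:
        print("  No EXIF metadata found (it may have been stripped on export).")
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
        print(f"  EXIF {tag_name}: {value}")


if __name__ == "__main__":
    # Hypothetical path; replace with the original file supplied by the
    # other party (not a screenshot, scan, or hard-copy printout).
    inspect_file(sys.argv[1] if len(sys.argv) > 1 else "exhibit_photo.jpg")
```

For audio and video files, equivalent container metadata can be inspected with tools such as ExifTool or ffprobe. Where anything looks amiss, instructing a forensic expert remains the proper course.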

Deepfakes and the editing of evidence to benefit one party are likely to receive further consideration and scrutiny from the courts as awareness increases. It may be that parties will eventually be required to certify the authenticity of evidence they seek to rely on, or to state its source. For now, however, lawyers will need to think critically about the evidence they are presented with to ensure that, if concerns are raised by clients, initial steps can be taken to explore those concerns expeditiously.


Kitty Broger-Bareham and Kieran Ball are barristers at 4PB, London