Earlier this month, judgment was handed down in the biggest copyright case of the decade. Practitioners hoped that Getty Images v Stability AI would determine whether the training of an AI model and its outputs infringed copyright. Sadly, in a decision that will come as no surprise to those who followed the June trial, these questions remain unanswered.

On the points left for the judge to decide, neither the creative industries nor the AI community can claim to be the clear winner.
At the heart of the case was the use by Stability AI of what Getty Images called its ‘lifeblood’, the photo library established in the 1990s using the Getty family’s oil money. Stability AI trained its AI model, Stable Diffusion, on pictures from this library without Getty Images’ consent. The training took place outside the UK.
Getty presented to the court a series of output images created by various versions of Stable Diffusion which it said could be traced back to photos in the Getty Images library. Damningly, a number of these Stable Diffusion creations reproduced Getty Images watermarks, seemingly supporting Getty's claims for copyright, database right and trade mark infringement (as well as passing off).
However, during the trial, Getty Images acknowledged that there was no evidence that the training and development of Stable Diffusion took place in the UK, and so abandoned its 'training and development claim'. Further, because Stability had blocked the user prompts which generated the allegedly infringing AI outputs, Getty also abandoned its copyright infringement claim relating to those outputs. Finally, having abandoned both the training and development claim and the output claim, Getty did not advance its claim for database right infringement.
This left Getty with its trade mark infringement claim in respect of the watermarks on the AI outputs and a claim that the Stable Diffusion AI model itself was an infringing article, such that Stability AI was liable for secondary copyright infringement on the basis that it had imported into the UK, possessed and dealt with an infringing copy. A key issue in connection with the secondary copyright infringement claim was whether the licensed photos relied on by Getty Images were subject to exclusive licences providing it with concurrent rights with the photographer copyright owners so as to be jointly entitled to a remedy for copyright infringement.
The big win for Getty Images and a key takeaway from the case is that the creators of AI models can be found liable for infringing outputs from their AI tools. This was determined on the basis of the judge’s findings of ‘double identity’ and ‘confusion’ trade mark infringement under sections 10(1) and 10(2) of the Trade Marks Act 1994 (TMA); Getty was unsuccessful in its ‘detriment’ claim under section 10(3) of the TMA and in its passing off claim.
The trade mark infringement findings come as no surprise, given the clear reproduction of Getty Images’ trade-marked names which appeared as watermarks on some of the AI output images. However, as the judge said, these findings were both ‘historic and extremely limited in scope’. They will not deliver Getty Images much in the way of damages when quantum is finally determined.
However, what practitioners can extrapolate from this is that where there is a clear link between an intellectual property right and an AI output, the AI company will be liable if there is a finding of infringement. Hypothetically, on this basis, it is possible to imagine Taylor Swift bringing a case against an AI song generator in respect of a prompt 'create me a Taylor Swift song' where the output lyrics reproduce a substantial part of her back catalogue. In such a scenario, what would be equally interesting (and another issue not addressed by the Getty case) is whether the AI output could be defended on the basis that the lyrics are a 'parody' or a 'pastiche'.
Notwithstanding that Getty Images established that it controlled the exploitation of some of the photos in issue on the basis of exclusive licences, its claim of secondary infringement of copyright failed.
Despite the decision going against Getty, the finding that an 'article' (for the purposes of the Copyright, Designs and Patents Act 1988) includes intangible objects, such as AI models, will assist practitioners with future arguments.
The problem for Getty was that it struggled to persuade the court that an AI model amounted to an 'infringing copy'. Getty accepted that Stable Diffusion itself did not comprise a reproduction of any photos, but it argued that the definition of 'infringing copy' was sufficiently broad to encompass an article (including an intangible article) whose creation or 'making' involved copyright infringement. Getty pointed out that it was common ground that: (i) the training of the AI model involved the reproduction (by means of storage) of the photos; and (ii) the 'making' (or optimisation) of the AI model weights required their repeated exposure to the photo training data. Getty argued that this 'making' satisfied the definition of an 'infringing copy'.
Stability highlighted that its AI model was trained on copyright works in the US. Copies of those works were never present within its AI model, and the AI model cannot be an 'infringing copy' where it has 'never had the slightest acquaintance with a UK copyright work'. Stability also highlighted that the act of training the AI model weights ultimately did not involve storing or reproducing the images in those weights.
The judge agreed with Stability AI. She said: ‘Stable Diffusion… does not store or reproduce any copyright works (and has never done so) [and so] is not an “infringing copy”.’
Getty turned oil money into digital media assets. Stability AI turned those media assets into algorithmic data. Data is the new oil, and we are no closer to learning whether this transformational process amounts to a primary infringement of copyright in the UK.
Iain Connor is an intellectual property partner at Michelmores






















