The scourge of AI-generated fake cases has moved into intellectual property, with a new judgment revealing another party falling foul of fabricated citations.

In an Intellectual Property Office appeal ruling, litigant in person Dr Mustapha Soufian admitted he had drafted his case concerning the sign ProHealth with the help of the large language model ChatGPT.

Phillip Johnson, the appointed person overseeing the appeal, pointed out numerous errors in the citations and problems with Soufian's skeleton argument.

Victor Caddy, a trade mark attorney for the respondent, was also criticised over citations which, while genuine, appeared not to support his submissions. When asked during the hearing to clarify which parts of the judgments were relevant, Caddy replied: ‘I cannot actually remember that now.’ According to the judgment, after the hearing Caddy emailed the tribunal to say he had not been expecting to ‘make out my own side’s case more so than had been done in the skeleton’.

Johnson added: ‘Nothing in the email improved Mr Caddy’s position and the quotation above clearly makes it worse.’

The ruling cited the Ayinde case, in which the Divisional Court considered the problems arising from advocates using generative artificial intelligence to prepare documents for use in court.

Johnson acknowledged that Soufian was a litigant in person but said he was still under a duty not to mislead the court. No litigant in person would be punished for raising irrelevant arguments which were honestly made or based on a genuine misunderstanding, but this was different from relying on law which had been fabricated.

Johnson said the trade mark registrar should give clear warnings explicitly setting out the risks of using AI for legal research and drafting skeleton arguments.

‘It is important that all litigants before the registrar... and during any appeal to the appointed person are made aware of the risks of using artificial intelligence,’ he added. ‘Many litigants-in-person will know little about trade mark law and think that anything generative artificial intelligence creates will be better than they can produce themselves. So a very clear warning needs to be given to make even the most nervous litigant aware of the risks they are taking.’
