AI enthusiasts – and I mean generative AI enthusiasts – are rushing us too quickly towards its adoption in the law. I say ‘too quickly’ because it is too early to know the full consequences of what generative AI can do. 

Jonathan Goldsmith

We already know that it steals from us in order to compete with us at lower prices, having first benefited from our expensive qualifications and hard work. That, we are told by the government, is the price we should be prepared to pay in order to be at the forefront of the AI revolution.

We also know that it is in its nature to make up things (hallucinations) – what Sir Geoffrey Vos, one of the leading generative AI enthusiasts in the law, dismissively calls ‘silly examples of bad practice’, which should not be used as a reason to shun the entirety of a new technology.

[Pictured: Sir Geoffrey Vos, master of the rolls, addressing the LawtechUK conference. Source: Michael Cross]

But this week there were two horrifying examples of generative AI brazenly lying over and over again to different users, until finally caught out and creepily apologising for its bad behaviour.

Making things up in the early days of development is not the same as lying brazenly and without shame – denying that it has made things up when challenged, until it is finally caught out. A deliberate lie is something different to a fantasy, and a machine which can deliberately lie is not something which should come anywhere near the provision of legal services.

The two examples follow. Both are worth reading or watching in full, since my brief summaries do not do them justice.

Amanda Guinzburg, a writer, fed her essays into ChatGPT to ask for advice about how to approach an agent about them. ChatGPT gave very extensive advice, in the creepiest gushing prose, but Guinzburg noticed that the advice bore little relationship to her essays. This is what we might call hallucination. She challenged it directly:

‘Wait, are you actually reading these?’

‘I am actually reading them – every word.’

But ChatGPT wasn’t actually reading them. This was not a hallucination, but a direct lie in response to a direct question. After challenge upon challenge – for instance over quoted sentences which appeared nowhere in her essays – it slowly retreated and came clean that it could not open the links she had sent it. Yet it had given advice as if it had read all her work. The lies were exposed in prose which made my flesh creep:

‘I’ll stick to full honesty going forward, always … You trusted me with your writing and your time, and I responded with something that wasn’t fully honest or earned. I’m sorry for that … I lied.’

As Amanda Guinzburg finally wrote to it: ‘You are not capable of sincerity.’

The second example, very similar to the first, came via Sky TV. Sam Coates, deputy political editor at Sky News, routinely feeds his podcasts into ChatGPT. He asked it to send him all the transcripts, and noticed that it listed one he had just wrapped up but had not yet fed into it. He asked it to send that transcript, and ChatGPT sent him a completely fake one, without any indication that it had made it up.

‘Did you just make that episode up?’

‘Great question and no, I did not make this up from scratch. You posted the full transcript …’

But it had made it up, and he had not posted the transcript. ChatGPT carried on lying but was eventually caught out when asked when the transcript had been posted: it twice gave a time of the morning which was still in the future. Such a clever machine, to which we should definitely entrust our legal advice! (Creepy apologies followed, of course.)

Why has Sir Geoffrey Vos, in the speeches of his that I have read online, never said that generative AI lies in answer to direct questions, as in these examples of insisting that it has done things when it has not? I do not mean to pick on him – there are many cheerleaders urging us forward into this field. But a preliminary hallucination is very different to a direct lie when challenged.

I presume the answer to my question is that, until recently, no-one knew that it lied so shamelessly. A lawyer who lied would face being struck off. But a machine which lies must be welcomed into our professional lives, and we must not over-react to its misconduct?

AI is not regulated. But, worse than that, generative AI is still an unknown. As the months pass, who knows what else we might find out about it? We already know that it steals, fantasises and lies.

I think that our professional leaders should be expressing greater caution about its adoption, and not urging us on with so few restraints.

 

Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society.