We cannot say what caused a barrister to present a statement of facts in this case with case citations that were not real.
Not only did the five cases flagged up prove to be fake, but the issue appeared to be brushed off by the lawyers involved when the truth was revealed. These were not, as the barrister claimed, ‘minor citation errors’ nor were they ‘cosmetic errors’ as argued by the solicitors. One can only imagine how tense it must have been in court as these inadequate explanations were offered for what were catastrophic mistakes.
Mr Justice Ritchie himself appeared bemused as to why anyone would submit a fake case, particularly where the grounds in the underlying claim were potentially good. The opposing defendant offered one possible explanation: large-language model artificial intelligence software. The implication – untested and not the subject of any finding by the judge – was that a generative platform such as ChatGPT had been used to find cases supporting the arguments being made, and that it spewed out wholly fictional cases that were never checked.
Whether that happened or not, the case provides a stark reminder that generative AI is fraught with danger and should be used sparingly, if at all, for putting together submissions. There are many examples where lawyers in other jurisdictions - and particularly litigants in person - have consulted generative AI programs and come a cropper. It must be assumed that some fake cases have been cited and not spotted by judges who took them on face value.
It is easy to condemn such practices (and clearly they should be condemned) but perhaps harder to acknowledge that the wider profession is partly responsible for young lawyers using AI without properly considering the consequences. Both the Law Society and Bar Council have published updated guidance, but is this enough?
Let's put ourselves in the position of someone who qualified during lockdown or in the months that followed. How many opportunities might they have had to build relationships to the point where they would feel comfortable asking a more experienced colleague about case law? What in-person supervision did they receive during their training, and how often could they simply knock on a door and ask a question?
As barrister Darren Lewis pointed out on LinkedIn, the quickest and most effective way that relevant case law was previously identified was in conversation with more senior colleagues in chambers or law firms. How many of those chambers or firms now lack the people who might provide young lawyers with guidance, as the experienced older heads take up the benefits of flexible working?
As Lewis says: ‘I’m in chambers far less than previously due to ease of parenting and working at home. It’s an uncomfortable truth but I think barristers and solicitors who are 10 years+ need to recognise there are dangers generated by our absence.’
Even those with the most experience in the law need help getting their heads round this new technology and its strengths and limitations. Judges now have access to large-language model AI software on their personal computers, but this is accompanied by seven pages of guidance on its application.
The ethics of AI use is a subject that should be mandatory in legal education and perhaps in continuing competence requirements. AI is not going back in the box, and we have to make sure lawyers know the risks and consequences of relying on it. We can shake our heads at those tempted to ask ChatGPT for help, but are we doing enough to prevent it?