Generative AI is transforming legal work. That has potentially significant ramifications for professional negligence claims and how solicitors insure themselves

In his closing keynote at LawtechUK’s generative AI event on 4 March, master of the rolls Sir Geoffrey Vos envisaged a time when a lawyer who did not use AI might be considered negligent, while acknowledging that generative AI ‘might need a little more checking’. Last year, Lord Justice Birss said that he had used ChatGPT to help him write part of a judgment. Also last year, two New York lawyers were famously fined $5,000 after submitting a legal brief containing six fictitious citations generated by ChatGPT in a personal injury case against Colombian airline Avianca. If lawyers may be considered negligent for not using AI, yet can also be liable for the consequences of its shortcomings, will the normalisation of generative AI produce a catch-22 for lawyers and their insurers?

Joanna Goodman

Emma Wright, who leads the technology, data and digital group at Harbottle & Lewis, says that, to some extent, AI is just another tool, speeding up internal processes for law firms. Once a technology has a critical mass of adoption, it becomes the accepted way of doing something.

‘When I was a trainee, two trainees would compare versions of a document by reading out one version and checking it against the other,’ recounts Wright. ‘Now we have redline comparison tools. Firms had law libraries, now we use Westlaw or legislation.gov.uk.’

Wright, who is also counsel and director of the Institute of AI and sits on the World Economic Forum AI Action Alliance, adds: ‘These tools allow lawyers to work faster and more efficiently, and eventually they become the usual way to do a process.’ Such methods have led to clients asking lawyers about the tech they are using, and querying time-consuming tasks and processes.

Normalising AI

Matt Hervey, partner at Gowling WLG, sits on the City of London Law Society AI committee, which recently held a panel discussion on AI and liability. This featured RPC partner Graham Reid, who specialises in professional negligence claims against lawyers and law firms, and advises insurers on coverage issues. While Reid cannot recall a negligence case against a lawyer for not using established technology, he can envisage scenarios in which this could occur. For example, if a lawyer using a law library, or a Google search, did not find an important precedent on an obscure topic because they did not consult online platforms which are constantly updated, this could damage their client’s case. ‘This normalisation of new processes is happening all the time,’ Reid says. ‘When something new comes along, there are early adopters, and then, after a certain passage of time, everyone is using it. And this is certainly happening in some AI use cases.’

But can the normalisation journey work for generative AI in the same way as it has with previous tech tools? Hervey observes: ‘Up to now, tools that have become commonplace have been designed to do a specific task in an intuitive way. But AI is different in that it’s statistical and it can hallucinate, so there isn’t a straightforward pathway to it becoming the norm.’

‘Normalisation of technology, even AI technology, makes it difficult to criticise a law firm for using a widely adopted tool,’ says Reid, in response to Hervey referencing Pyrrho Investments Ltd v MWB Property Ltd, the 2016 case in which the court approved the use of predictive coding in the electronic disclosure process. ‘However,’ he adds, ‘there is an aspect of risk transfer which is not being addressed. If the court says we have a giant corpus of documents to search through and we’ve all agreed on the search terms, and you then miss something salient, pointing to the court approval and saying the court agreed it was an imperfect process will go a long way towards addressing quite a lot of the risks associated with using AI. The tools are only as good as the input.’

The same applies to more conventional tools – for example, scanning documents into litigation support systems and using OCR (optical character recognition) software to read them. Reid explains: ‘If parties in a case agree on the search term “fraud”, and a document is missed because the OCR misread “fraud” as “food”, and as a consequence the defendant loses the case, the judge won’t be interested in blaming anyone as it will be seen as a computer glitch, and OCR had been adopted as an acceptable – not perfect – solution.’

While it is theoretically possible to be considered negligent for failing to use technology, in practice this is unlikely. Once a tool becomes customary, it is quite hard to be considered negligent even when the accepted technology repeatedly goes wrong.

Wright sees this as potentially problematic. ‘The problem with Sir Geoffrey Vos’s statement is it rather assumes that the output [of generative AI] is objective – a little like a calculator adding up numbers – when we know that this is simply not the case with AI. So we should really start with [the premise that] solicitors may be negligent for taking the output of AI at face value (whether they have built it or not, as AI behaves in unexpected ways) – which, if we are being generous, is rather like what seems to have happened in relation to the [Post Office] Horizon system.’

This was reflected in the Avianca case, where the lawyers were fined not for using ChatGPT but for using its outputs without checking them. Conversely, Lord Justice Birss made it clear that he had checked what ChatGPT wrote for him: ‘All it did was a task which I was about to do and which I knew the answer to, and I could recognise the answer as being acceptable.’ While adopting AI and automation is making legal services more efficient, it is crucial that there are checks and safeguards, given that automation gone wrong was at the heart of the Horizon scandal, considered the most widespread miscarriage of justice in UK history.

High street bots

LawtechUK’s Access to Justice event and its Ecosystem Tracker report highlighted lawtech’s untapped potential. Only 7% of 356 lawtech companies cater to the consumer market, where, according to the Legal Services Board, 3.6 million people have an unmet legal need and 1.8 million small businesses deal with legal issues on their own. It is clear that more of the £1.38bn total investment in lawtech needs to be directed at companies serving the B2C and SME markets.

Earlier this month there was a step in the right direction, as UK-based start-up Lawhive raised £9.5m in a seed funding round led by Google Ventures, with participation from existing investor Episode 1 Ventures. This follows the £1.5m it raised in 2022. Lawhive targets high street law firms. Its AI paralegal, Lawrence, which is built on OpenAI’s GPT-4 and Lawhive’s own in-house tech, carries out routine work which is then checked by a lawyer.

Insurance conundrum

The risks involved in using AI, particularly generative AI, also raise questions about insurance. Professional indemnity insurance (PII) does not cover technology – it covers legal advice. ‘If a client says, “we don’t want legal advice, just give us the output of the tech”, it’s a different service,’ says Wright.

Reid has identified potential insurance-related issues here. ‘SRA-authorised firms need to have mandated minimum cover,’ he says. ‘However, the main insuring clause includes prominent use of the phrase “as a solicitor or registered European lawyer”, so it reads as if there has to be a person involved in the provision of services to guarantee cover. So there is a case for saying there is no cover if the service provision is wholly AI-delivered, because there is no person in the loop.’ Another potential misconception concerns cyber risk: although PII was amended to include elements of cyber cover, this relates to hacking, not to anything arising from AI use.

The most significant insurance issues relate to the aggregation clause, which allows the insurer to aggregate a series of similar claims into a single claim. This does not work for generative AI: its lack of explainability makes it difficult to show that a series of errors arises from a single underlying cause.

‘For example, if a chatbot giving advice in the early stages of a personal injury claim gets the limitation period wrong and says it’s four years rather than three years, and a whole bunch of people miss the deadline because of the chatbot, the insurer will not be able to treat all those claims as a single claim, under one limit of say £3m,’ explains Reid.

‘The aggregation clause has not changed since 2006/07 and it’s an important part of how insurers assess their exposure, so a lot of money can turn on it. The more firms adopt AI, the less insurers will like that, and if they can’t change the language, they will either stop insuring solicitors or they will hike their premiums.’

Firms using AI can de-risk by buying extra insurance. Reid says: ‘As law firms will not be able to pass on risk to their clients, or technology suppliers, who will have a robust set of disclaimers, the solution will be to take out additional insurance. This is not SRA professional indemnity insurance, which responds if somebody sues you, but rather first-party cover for any loss or harm incurred, including if someone sues you. However, the insurer has to be careful about pricing that risk. It’s effectively saying if you want to use ChatGPT, we are aware of its propensity to make mistakes, so you can buy this specific insurance. If there’s a market for that kind of cover, and it can be delivered at a sensible price, it will be hugely desirable for law firms, and everyone else.’
