Scepticism in some quarters has not stopped generative AI from being embraced as a game-changer for legal services delivery. Its adoption is happening faster than many think.

Chatbots are still the hottest topic in legal tech. While the legal and legal tech communities have been experimenting with OpenAI’s GPT-3 (Generative Pre-trained Transformer) large language model, generative AI has been the subject of multiple discussions and webinars exploring its opportunities and threats for legal services and lawtech. We have even seen a few new legal tech products powered by GPT-3.

Joanna Goodman

This is not surprising as the public version, ChatGPT, is the fastest-growing consumer app in history. It was released on 30 November 2022, and within two months it had an estimated 100 million active users. But as competition increases in this space, generative AI has become a target for regulation because of concerns that it could be used for plagiarism (particularly in educational assessments) and to spread disinformation.

When it comes to AI in legal services, digital ethics issues already apply to the narrow AI in use – for e-discovery, legal research, contract analysis and automation, and client onboarding. The same issues apply equally to generative AI models and their probabilistic algorithms.

As mainstream technology rapidly shifts towards generative AI, this is an important development for lawyers and law firms, which are almost all Microsoft users. Microsoft, which increased its investment in OpenAI last month, has already started introducing GPT-3 into its products. On 1 February, it rolled out a premium Teams offering powered by GPT-3.5, which can generate automatic meeting notes, recommend tasks and create meeting templates. According to Reuters, Microsoft aims to add ChatGPT to all its products.

And there is already a waiting list for a ChatGPT-powered version of its search engine Bing, where the chat functionality produces search results in plain text rather than a series of links, and lets you refine your search by asking follow-up questions.

Controlling inventive algorithms

This would be incredibly useful, certainly for legal (or any) research, except that, when large datasets are involved, generative AI models become less reliable. If the information ChatGPT is asked for is not available or accessible, the algorithm creates a plausible alternative. In one test reported on LinkedIn, it invented legal precedents that did not exist!

This is already a challenge for the next generation of search engines. Google parent Alphabet lost $100bn in market value after its new chatbot Bard (powered by Google’s generative AI model LaMDA, which is built on the same transformer architecture as ChatGPT) gave a wrong answer during its first demo on 6 February – notwithstanding that Google has given no indication about incorporating Bard into its search engine.

This propensity for invention relates to the algorithm. In the New Yorker last week, science fiction author Ted Chiang described generative AI models like ChatGPT as ‘a blurry JPEG of the web’ – the equivalent of a Photoshop blur tool applied to paragraphs instead of photos. That is, the algorithm prioritises statistical proximity over accuracy to produce cogent texts and summaries. It can therefore already be used effectively if you apply it to verified datasets and ask it the right questions. This explains why GPT-3 can pass the US bar exam, summarise a precedent or other document, and take meeting notes. But it is less successful at answering straightforward questions, or doing simple mathematics, with accuracy falling below 10% when calculations involve five-digit numbers.

As ChatGPT generates conversational text in response to questions, it can easily be applied to legal documents and defined processes. The commercial APIs (application programming interfaces) for GPT-3 and GPT-3.5 make the models relatively easy to integrate into existing systems and processes. The key is to ask the right questions. And in just a few weeks, several new GPT-3 legal applications have emerged.
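As an illustration only – not any vendor’s actual implementation – the sketch below shows roughly what such an integration looked like with OpenAI’s Python client as it stood in early 2023. The sample clause, prompt wording and parameter values are all hypothetical.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, read from an environment variable

# A hypothetical termination clause to summarise.
clause = (
    "Either party may terminate this Agreement on 90 days' written notice, "
    "save that the Supplier may terminate immediately on any material "
    "breach by the Customer."
)

# 'Asking the right question': a tightly scoped prompt grounded in the
# supplied text, rather than an open-ended query the model might answer
# by inventing details.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Summarise the following contract clause in plain English, "
        "using only the information given:\n\n" + clause
    ),
    max_tokens=120,
    temperature=0.2,  # a low temperature discourages embellishment
)

print(response["choices"][0]["text"].strip())
```

Grounding the prompt in supplied text and keeping the temperature low are the programmatic equivalent of asking the right questions: they steer the model towards summarising what it was given rather than inventing what it was not.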

Last week saw the first Barclays Eagle Labs in-person event since 2019, co-hosted with LawTech.Live. A panel discussion on tech trends in law firms focused on: choosing contract lifecycle management products, especially in document automation, where there are over 200 offerings; the perennial problem of managing/leveraging data; and legal operations.

The general view was that legal tech needs to pivot from lawyers’ tasks to business outcomes. This is reflected in law firms increasingly investing in legal operations – as a function, and as a career path. Billie Moore, knowledge and innovation manager at Slaughter and May, outlined the firm’s legal operations graduate training programme and its legal operations consortium, whereby legal operations executives are trained by the firm and seconded to client organisations.

This may also be part of a response to a reduction in in-house legal roles, perhaps due to the economic slowdown. The panel agreed that this may be the year of legal operations bringing business and tech skills to law firms and their clients.

Chatty co-pilots

When AI was first applied to legal services, in some ways it failed to live up to expectations because it was too narrow. Although it fulfilled specific tasks and functions efficiently, questions were continually raised about its limitations. Generative AI is significantly broader, creating cogent text based on a prediction algorithm, but it is less precise. However, because it works with text – and legal tech mostly relates to documents – law is an obvious target market. OpenAI recognised this early on. One of the OpenAI Startup Fund’s early investments was Harvey, founded in San Francisco by research scientist Gabriel Pereyra and lawyer Winston Weinberg. Harvey is an intuitive interface for legal workflows, which its website describes as a ‘copilot for lawyers’. Magic circle firm Allen & Overy made it into the Financial Times’ ‘best-read’ stories yesterday after announcing that it is rolling out Harvey across its offices for legal research and document generation.

'We were chatbot-first, so we had a natural interest in ChatGPT. We have combined the two technologies to create powerful contract summaries'

Tom Dunlop, Summize

Recent weeks have seen a steady stream of generative AI offerings from mainstream legal tech vendors, as well as agile startups and scale-ups. GPT-3 applications are being developed to help lawyers speed up routine tasks and processes. Contract lifecycle management (CLM) provider Ironclad’s AI Assist uses GPT-3 to enable in-house legal teams to use pre-approved clauses to instantly generate redlined versions of contracts, which appear as tracked changes in Word. Here in the UK, CLM solution Summize uses GPT-3 to help corporate legal teams create ‘super summaries’. Founder and CEO Tom Dunlop explains that while Summize’s core technology contextualises contract information by extracting key dates, locations, numbers and terms, GPT-3 is used to create natural language explainers. ‘We were chatbot-first, so we had a natural interest in ChatGPT. We have combined the two technologies to create powerful contract summaries.’
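The general ‘extract, then explain’ pattern Dunlop describes can be sketched in a few lines. The following is a hypothetical illustration, not Summize’s actual code: a deterministic extraction step produces structured facts, and GPT-3 is then constrained to those facts when writing the explainer. The field names, values and prompt are invented for the example, again assuming OpenAI’s early-2023 Python client.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Step 1: structured contract data, extracted deterministically
# (in a real product this would come from the vendor's own extraction
# engine; these fields and values are invented for the example).
extracted = {
    "term": "24 months",
    "notice period": "90 days",
    "governing law": "England and Wales",
    "liability cap": "£1,000,000",
}

facts = "\n".join(f"{field}: {value}" for field, value in extracted.items())

# Step 2: GPT-3 turns the structured facts into a plain-English
# summary, constrained to the extracted data to limit invention.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write a short plain-English summary of a contract for a busy "
        "in-house lawyer, using only these extracted facts:\n" + facts
    ),
    max_tokens=150,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```

Keeping the extraction deterministic and feeding the model only verified facts is one way to get the fluency of generative AI without its propensity for invention.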

US chatbot-first developer LawDroid, whose no-code platform helps law firms build chatbots, used GPT-3 to create Copilot, a multi-functional, conversational assistant for lawyers. As CEO and founder Tom Martin explains, a ChatGPT-style interface with built-in prompts helps lawyers conduct legal research. Its search functionality allows follow-up questions, and it creates summaries, corrects grammar, translates text into other languages, and drafts email templates. It can also be used to complete client onboarding forms. Unlike other legal GPT products, Copilot makes use of some of ChatGPT’s inventive qualities too, with options to ‘brainstorm blog ideas’ and even ‘chat about anything’, a kind of emotional support function! Copilot has been used by a judge in Louisiana to create a searchable knowledge base for criminal defendants.

DoNotPay’s ‘nothingburger’ challenge

DoNotPay’s attempt to get the first robot lawyer into a US court via a GPT-3 app turned out to be mostly a publicity stunt because it would have breached court regulations. Subsequently, DoNotPay founder Joshua Browder announced his intention to pivot away from legal services and focus on consumer rights. However, DoNotPay hit the headlines again on 13 February. A paralegal in Seattle filed a complaint about the company’s claims regarding its use of AI, leading to concerns over whether this challenge (which Browder described as ‘a bit of a nothingburger’ in an interview for Bob Ambrogi’s Above the Law podcast) discredited the use of AI in justice tech. Nicole Bradick, CEO and founder of Theory and Principle, which designs and develops tech products, including for non-profits, kept her feet firmly on the ground. She tweeted that it was not clear whether DoNotPay’s recent activities had any impact on its users or investors. ‘Does any of this have ANY relevance whatsoever to Justice tech, legal tech, lawyer regulation, or even the law?’ she wrote, ‘because the effects may … be marginal at best’.

Although DoNotPay’s marketing strategy is controversial, its AI-powered chatbot operates solely in the consumer space, generating responses for people looking to challenge parking tickets and excess charges, and to close unwanted subscriptions. While generative AI is recognised as a game-changer for legal services delivery – particularly by corporate legal teams and larger firms, some of which are already recruiting GPT prompt engineers – in the US at least there seems to be a push-back against its use in justice tech: the online apps that help consumers understand and assert their legal rights against local authorities and large corporations.
