The exponential rise of generative AI is seeing firms move away from the traditional build/buy dichotomy towards greater innovation

Rapid developments in generative AI are shifting firms away from the traditional ‘build or buy’ legal tech procurement choices towards innovation at the interface. 


Joanna Goodman

This combined strategy involves trialling new legal tech products that allow firms to experiment with generative AI within their existing systems, while building their own internal applications on OpenAI’s large language models to address challenges around data security, reliability (hallucinations) and intellectual property/copyright.

This is notwithstanding that OpenAI’s ChatGPT Enterprise platform offers enterprise-grade security and privacy, and increased processing capabilities. Microsoft’s new Copilot Copyright Commitment extends Microsoft’s intellectual property indemnification coverage to copyright claims relating to the use of its AI-powered assistants.

The generative AI buzz has increased both awareness and adoption. LexisNexis’ International Legal Generative AI Report found that nearly 89% of 3,752 lawyers surveyed are aware of generative AI and 43% either use it or plan to use it for legal work. Two-thirds of UK respondents are aware of both its potential and its drawbacks – but many are keen to use it.

Innovation at the interface

Since Travers Smith launched its open source generative AI chatbot YCNBot on GitHub in March, the code has been downloaded by more than 100 organisations, says Shawn Curran, director of legal technology.

Curran’s ‘build’ strategy is based on the premise that working with large language models takes out 99% of the development work normally involved in building a software product. He says: ‘The AI model is the rules engine, which does all the computation – and that’s what we’re buying. So building with GPT-4 is 99% “buy” and only 1% “build”. And a lot of vendors are building on GPT-3 or GPT-4 so they are only building the last 1% to create their product. If you’ve got an engineer, why not buy the model and get your engineer to build the thin interface layer on top of that sophisticated AI capability?’


While Curran acknowledges that generative AI is becoming embedded in mainstream legal tech, he considers most solutions are relatively narrow: ‘Vendors are taking a sophisticated large language model and using a fraction of its capabilities to build a specific product. But as it’s relatively easy to build a thin layer on top of a large language model, it makes sense to build an entity which provides flexible access to a sophisticated model. Our latest product, Analyse, provides the ability to create a project, add content in multiple formats, set up the model and interrogate it using a series of prompts. For example, a capital markets lawyer could verify claims in a prospectus by interrogating the company’s financial statements. It gets around possible hallucinations by highlighting the reference being used for verification, or noting that further checks are needed.’
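The citation-or-flag behaviour Curran describes – every answer either points at the reference used for verification or is flagged for further checks – can be sketched crudely. The naive word-overlap matching below is purely illustrative (real systems use retrieval over embeddings, and this is not how Analyse works); it only shows the control flow of cite-or-escalate.

```python
# Illustrative cite-or-flag pattern: a claim is either matched to the source
# passage that supports it, or returned as None ("further checks needed").
# Word overlap is a stand-in for proper retrieval; names are hypothetical.

def verify_claim(claim: str, passages: list[str], threshold: float = 0.3):
    """Return (best_passage, score) if overlap clears the threshold, else None."""
    claim_words = set(claim.lower().split())
    best, best_score = None, 0.0
    for passage in passages:
        overlap = claim_words & set(passage.lower().split())
        score = len(overlap) / max(len(claim_words), 1)
        if score > best_score:
            best, best_score = passage, score
    if best_score >= threshold:
        return best, best_score  # cite the reference used for verification
    return None                  # flag: further checks are needed

# Example: verifying a prospectus claim against financial-statement extracts.
passages = [
    "Revenue for FY2023 was 120m GBP, up 10 percent year on year.",
    "The board comprises seven directors.",
]
result = verify_claim("revenue was 120m GBP in FY2023", passages)
```

The design point is that the system never answers without either a citation or an explicit escalation, which is what keeps hallucinations visible to the lawyer.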


Curran focused on ‘innovation at the interface’ because generative AI platforms on their own cannot be differentiators. ‘AI can optimise human productivity, which is the core asset of a professional services business, so we should be figuring out how to [use AI to] create competitive advantage, rather than outsourcing it to third parties.’

At Mishcon de Reya, head of data science and analytics Daniel Hoadley agrees that building on large language models helps law firms understand their capabilities: ‘There is a generative AI playbook that law firms are following: create a secure and private environment to explore its transformative potential, build a private version of ChatGPT that stays within the perimeters of the firm, and a conversational agent to interrogate documents.

‘We identified low-risk, maximum-reach use cases and built a private ChatGPT and a system for uploading documents and interrogating them in a conversational way,’ Hoadley says. ‘We also control, observe and audit how they are used.’ Like Travers Smith’s Analyse, Mishcon’s applications were built on GPT-4, but are model agnostic. He adds: ‘We evaluate models based on four elements: performance, cost, security, and ethical and environmental credentials.’ Hoadley focuses on deploying generative AI responsibly, so that people with access to the applications understand what they are good at and what to use them for, which is a ‘massive, multi-team effort’.

AI and ESG

Law firms’ tech-enabled hybrid working supports their ESG (environmental, social and governance) programmes by reducing office use – and sometimes office space – and business travel. This sits uneasily with the boom in generative AI, which uses a spectacular amount of computing power. Another element in the ESG equation is a resurgence of ‘Green IT’, which includes minimising the amount of printing and the number of printers, switching to cloud systems and applications and recycling legacy hardware. Vendors too are supporting sustainability. Workplace technology vendor Agilico’s ESG strategy includes a commitment to net zero by 2030 and a hardware recycling and refurbishment scheme which reduces waste and supports a circular economy. While digital transformation is reducing firms’ carbon footprint, the rush to embrace generative AI is writing a different story.

Exploring all the options

Stuart Whittle, Weightmans’ chief technology and innovation officer, highlights cost as a key element in the build/buy dichotomy. He observes that while OpenAI is inexpensive to access from a developer perspective, once you start exploring vendor offerings the cost of experimenting with large language models soon builds up. ‘It may make sense to use Microsoft 365, but a $30 (£24) a month Copilot licence for each of Weightmans’ 1,500 lawyers would require an additional annual IT budget of $540,000 (£432,000).’
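Whittle’s figure is straightforward arithmetic on the numbers quoted above (per-seat price and headcount are from the article; this is a back-of-the-envelope check, not pricing advice):

```python
# Annual cost of per-seat Copilot licensing at the headcount quoted above.
lawyers = 1_500
usd_per_licence_per_month = 30   # $30 per lawyer per month
gbp_per_licence_per_month = 24   # £24 per lawyer per month

annual_usd = lawyers * usd_per_licence_per_month * 12  # $540,000 a year
annual_gbp = lawyers * gbp_per_licence_per_month * 12  # £432,000 a year
```

It is this multiplication by headcount that turns a ‘cheap’ developer API into a material line in the IT budget.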

As a former lawyer, Whittle has issues with the most popular use case – legal research. ‘Large language models aren’t search engines,’ he points out. ‘If you ask an LLM the same question multiple times you don’t necessarily get the same answer, but lawyers are looking for a definitive answer. Or you might have a query where there is no legislative precedent, so there is no obvious answer.’ However, he is interested in exploring the Thomson Reuters integration with Microsoft Copilot for document drafting, using the firm’s SharePoint intranet as a content repository. ‘Our innovation team is working on identifying impactful use cases for testing a number of different solutions,’ he adds.

At Addleshaw Goddard, Elliot White, head of innovation and legal tech operations, and Michael Kennedy, senior manager of innovation and legal technology, are implementing a comprehensive build and buy programme, which involves piloting five generative AI products before deciding which one(s) to implement. At the same time, they have introduced their own version of ChatGPT within the firm and built an internal application, AG lease review, which includes 15 prompts.

The plan is to invest in external tools alongside their own applications, focusing sharply on the user interface – because UX (user experience) drives adoption. White explains that part of the rationale for developing their own applications was to understand the model, and to train users to interact with generative AI. Prompt engineering is a new skill – for lawyers as well as the innovation and legal tech team. ‘As we start to understand the model and its nuances, we can work out what to use it for. We put a lot of thought into the questions we ask – there’s a fine line between good and poor outputs and we are training our lawyers to ask the right questions,’ says Kennedy.

Building towards productising

On 12 September Dentons launched generative AI platform fleetAI, which is built on the OpenAI platform. Users can upload documents and select a chatbot to interrogate them – choosing between chatbots that use Microsoft APIs (application programming interfaces) for GPT-3.5 and GPT-4. As Joe Cohen, head of innovation, explains, client information is allowed on the platform (apart from a list of 10 clients who do not want their information stored on any cloud platform) ‘because of the work we’ve done with Microsoft on data privacy’.

Use cases include generating legal content (for example, clauses), and conducting legal research and analysis. Using GPT-4, it can summarise, analyse and interrogate complicated documents, and answer classic legal questions even when they involve value judgements (that is, what is the best way to…?). The platform does not have legal training, and the data is not used to train the model, although Dentons has built in anti-hallucination features. The user interface is straightforward, with guidance around prompts, enabling it to be made available to 1,800 people.

When it comes to build vs buy, Cohen explains: ‘Previously if I wanted legal AI, I would have to buy Luminance or Kira. Now I have access to an API that is 10 times more powerful at a significantly lower cost. We started building anticipating the release of the GPT-3.5 API, and now with the GPT-4 API too, we have enough throughput to analyse around 6 million words a minute.’ Looking ahead, Cohen is planning to roll fleetAI out to other regions (it is currently available only in UK, Ireland and Middle East offices) and to offer it to clients. The front end is ChatGPT – they will need to bring their own API. ‘When our clients see it, they are almost always interested in buying it,’ he says. Productising the first phase of fleetAI (without the contract generation capability) will make AI into a profit centre leveraging the firm’s expertise in law and technology.

Another consideration is that GPT and its competitors may struggle to improve their models as their access to training data is restricted by blocking mechanisms and potentially by regulation. Many companies are now blocking ChatGPT from crawling their websites to train future models. This will open up opportunities for legal research platforms to leverage their data. It is also likely to nudge more law firms towards building private large language model interfaces to apply to their own data.
