At the end of 2017, 40 of the top-100 law firms were reported to be using artificial intelligence (AI) on corporate matters, automating aspects of M&A due diligence, e-discovery and legal research, and predicting case outcomes. The biggest growth in AI-powered lawtech start-ups was in intelligent contract analytics.

As ‘legal AI’ becomes more available it is levelling the playing field, giving mid-market and smaller firms the scalability that allows them to compete in new markets as well as creating new business models to make legal services more accessible. ‘Chatbots’ offer individuals and small businesses free legal advice on their rights. They help with everyday matters such as parking fines and other local services, low-value contractual issues and disputes.

The rationale for introducing these new technologies rests on client expectations and market competition. Corporate clients want ‘more for less’ in terms of added value, cost-effectiveness and transparency, and market entrants such as the Big Four accountancy firms (see p19) and other alternative business structures are using advanced technology to deliver fast, effective services. But what of corporate legal departments? Are they driving change purely from a cost perspective or are they also looking to AI as a transformative technology?

AI and machine learning

Legal AI combines machine learning with natural language processing. Its main benefits, particularly for due diligence and contract analysis, are speed, consistency and scalability. Martin Blackburn, sales director of Luminance, explains that machine learning is what distinguishes AI software from traditional rules-based search and analysis: the software can read and understand a contract rather than search for strings of words. ‘Keywords and rules have limitations that make them harder to use,’ he says. ‘For example, you have to know what you’re looking for before you start the search. But the risk in a collection of contracts is likely to be in the “unknown unknowns”.’

AI software (such as Luminance) can classify documents without knowing what it is looking for – or looking at. It does this by discerning similarities and differences between different pieces of data. Blackburn uses the analogy of teaching a child to recognise a cat by showing it images of cats rather than explaining how a cat is different from a dog. The child would soon learn to distinguish between cats and dogs even though they have common features. In the same way, Luminance can be taught to automatically classify large volumes of documents. ‘The technology figures out the similarities and differences, and [human] experts subsequently apply labels to the various categories that have been identified. For example, it would be able to distinguish between employment contracts and sales agreements.’
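The clustering idea Blackburn describes can be illustrated with a toy sketch. The word-overlap measure, stop-word list and clause snippets below are invented purely for illustration; Luminance’s actual models are proprietary and far more sophisticated. The point is only that documents can be grouped by similarity without any labels being supplied up front, leaving a human reviewer to name the groups afterwards:

```python
# Toy unsupervised grouping of contract clauses: no labels are given in
# advance; documents are clustered purely by how much vocabulary they share.
# All data and thresholds here are invented for illustration.

STOP = {"the", "a", "an", "and", "is", "are", "to", "of", "on",
        "shall", "be", "must", "it", "in"}

def content_words(doc):
    """Lower-case vocabulary of a document, minus common stop words."""
    return {w.strip(".,").lower() for w in doc.split()} - STOP

def similarity(a, b):
    """Jaccard overlap between two documents' vocabularies (0..1)."""
    wa, wb = content_words(a), content_words(b)
    return len(wa & wb) / len(wa | wb)

docs = [
    "The Employee shall be paid a salary and is entitled to annual leave.",
    "The Employee is entitled to a pension and the Employee must give notice.",
    "The Seller agrees to deliver the goods and the Buyer agrees to pay the price.",
    "Title to the goods passes to the Buyer on payment of the purchase price.",
]

# Pick two maximally dissimilar documents as cluster seeds, then assign
# every document to whichever seed it most resembles.
seed_a = docs[0]
seed_b = min(docs, key=lambda d: similarity(d, seed_a))
labels = [0 if similarity(d, seed_a) >= similarity(d, seed_b) else 1
          for d in docs]

print(labels)  # -> [0, 0, 1, 1]
```

The two employment clauses end up sharing one label and the two sale clauses the other, even though nothing told the code what ‘employment’ or ‘sale’ means; that naming step is exactly the human labelling Blackburn describes.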

Anthony Kenny, assistant general counsel at GSK, adds that this type of self-learning contract analytics needs to be trained on a robust data set. Blackburn says that AI software can be trained to identify anomalies – items that bear similar hallmarks to the rest of the dataset, but are sufficiently different to pique the reviewer’s interest.

Legal AI in-house

Trevor Goodman, EMEA head of legal at Legg Mason, leads an in-house legal department that recently partnered with an intelligent contract analysis start-up. He read an article about the company and decided to find out more. ‘The product was in beta, so we were able to contribute to its development. It was applying AI to non-disclosure agreements and straightforward master agreements, but the bulk of my department’s work is complex distribution agreements which can be 30 or 40 pages long and in different languages.

‘The initial proof of concept arose because the third AML [Anti-Money Laundering] Directive was coming into effect. We have 600-odd distribution agreements of which about 300 are active at any one time, so it would take us a good few weeks to crunch through them all and produce a report. The technology took between 10 and 30 seconds to upload and review each agreement, and we achieved a 96.4% accuracy rate.

‘At the front end, we had to break down the agreements into a series of questions that the machine learning could answer. The final report produced various datasets and, again, the best lawyers are the ones that ask the best questions and have great analytical skills, who can take a lot of data, draw the inferences out and communicate them.

‘The technology augmented what we did and saved time. In terms of datasets, we needed 50 agreements uploaded to the system to have enough information to start drawing inferences. The more you upload the more sophisticated the review becomes.’

Goodman and his team selected a bespoke solution for their organisation. He explains that although there are specialist legal AI products for M&A, e-discovery, IP and risk management, he could not find an enterprise-wide AI solution. This raises challenges around bolting together bespoke products and integrating them with legacy systems and platforms.

With so many contract analytics vendors flooding the market, it is important to consider who to partner with and evaluate them carefully so that you can be sure they are going to be around for the next few years.

Working with start-ups

Legal AI only hit the mainstream a couple of years ago (although the concept has been around since the 1980s) so many AI vendors are start-ups. The lawtech start-up dynamic has galvanised the market, but working with start-ups presents challenges and risks as well as opportunities. Start-ups are dynamic businesses. They have a high failure rate and often start with an exit strategy of being acquired in order to produce a return on investment. Obviously, a change in ownership affects users of the technology. For example, RAVN’s acquisition by document management software provider iManage is great for existing iManage customers because it gives them the opportunity to use an AI engine that integrates with their standard document management system (DMS); but it may present issues for RAVN customers using another DMS. If a start-up fails, customers may have to think about how they are going to support the software they have committed to and may depend on as part of their IT architecture.

As Goodman explains, the trade-off is that working with a start-up enables customers to collaborate and participate in developing the technology and making it fit their requirements.

AI-powered triage

Not all legal AI is about contract analytics. Intelligent triage is another important development for volume and in-house legal work. This is an intelligent decision tree for distributing work appropriately. For corporate lawyers, it can also be used to decide whether to handle an issue in-house or whether to appoint an external law firm.
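The ‘intelligent decision tree’ idea can be sketched in a few lines. The matter attributes, thresholds and routing labels below are invented for illustration, and real triage products use richer, learned rules rather than hand-written ones; the sketch only shows the shape of the routing decision:

```python
# A hand-written decision tree routing an incoming legal matter either to
# an automated workflow, the in-house team, or an external firm.
# Fields and thresholds are hypothetical, purely to illustrate triage.
from dataclasses import dataclass

@dataclass
class Matter:
    practice_area: str   # e.g. "employment", "M&A"
    value_gbp: int       # estimated value at stake
    novel: bool          # raises a question the team has not seen before

def triage(m: Matter) -> str:
    """Route a matter based on complexity and value."""
    if m.practice_area == "M&A" or m.novel:
        return "external firm"        # complex or unfamiliar work
    if m.value_gbp < 10_000:
        return "automated workflow"   # low-value, routine
    return "in-house team"            # routine but worth a lawyer's time

print(triage(Matter("employment", 5_000, False)))   # -> automated workflow
print(triage(Matter("employment", 50_000, False)))  # -> in-house team
print(triage(Matter("M&A", 1_000_000, True)))       # -> external firm
```

In a production system the branching logic would typically be learned from historical routing decisions rather than coded by hand, which is where the machine learning comes in.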

Weightmans partner Rob Williams, the sole private practice lawyer at the roundtable, believes there is a need for a standard platform or one which uses standard algorithms. ‘Most of what we do as lawyers is about triage and decision-making. At Weightmans, we focused our AI efforts on data extraction and decision support. We are running a few proofs of concept, but we are not a software house so we need to collaborate and co-fund. And we need to involve our clients. If you’re client-centric, innovative and commercial, you have to be in the innovation space. And innovating alone risks designing something that’s fit for nobody. We are working on a platform that will start in one domain – personal injury – and be transferable to others.’

New business models are an important element of legal innovation. But where does that leave the traditional legal business model? ‘Perhaps we are approaching the problem from the wrong angle,’ Kenny suggests. ‘This technology is giving us the opportunity to question what we’ve been doing and ask ourselves whether we need even more radical change, and to look at how new technology can create new business models rather than fitting it to the current business model.’

Williams agrees: ‘We have to think about the future and imagine what we would do if we were starting afresh. We read articles about what’s happening in China, where they say they were born digital as opposed to playing catch-up. We are trying to get there as quickly as we can and not being conservative about how we do it. However, there are issues around governance and we also have to progress at a pace the organisation is happy with.’

Training ‘NextGen’ lawyers

‘One of my AI drivers is to remain relevant to my team,’ says Goodman. ‘I also need to look at the skill-sets that we will need. The UK is behind the curve in terms of giving law graduates tech skills and ensuring they understand AI, blockchain, smart contracts and coding. Although lawyers don’t necessarily need to code, they need to speak the language to be able to talk to the people who do. The US and Australia are ahead in terms of training and, without that foundation, it will be difficult for us to remain competitive. I’m trying to equip my team with some skills and show them proofs of concept.’

‘The world of the modern lawyer is changing and AI is impacting on that,’ adds Obelisk’s Debbie Tembo. ‘We need to go back to education and consider how we train lawyers in the competences they will need today and in future.’

Williams agrees: ‘When I started as a trainee, the partner I worked for believed in leaving the firm in a better state than when you joined. We need to pass on a good baton to the next generation.’

Automation, algorithms and chatbots

Natalie Jobling, former general counsel, corporate, at Network Rail and now a consultant to in-house legal departments, envisages AI reaching a point where people with common problems may not need a lawyer at all. ‘They may need someone who looks like a lawyer telling them the answer because they want to hear it from a person rather than a machine, but the work in the background will be automated,’ she says. ‘Lawyers will need to concentrate on the complexities, looking at the outputs of the machine and deciding which anomalies are the ones that matter.’

‘We have that now,’ observes Kenny. ‘Law graduates are disrupting the industry by creating algorithms that deal with triaging and basic legal work.’

Some of this is designed to cover unmet need for legal advice. Paul Cummins, head of legal at Milton Keynes Council, explains how automation and chatbots help the council to handle customer queries. ‘Chatbots are better than humans at triaging, getting straight back to customers and directing them to the best person to deal with their query. Milton Keynes Council did use a chatbot and they are now considering reintroducing it. It was withdrawn because people were asking it inappropriate questions and these conversations were hitting the local media.’

This represented a reputational risk for the council and highlights that, at least for the moment, legal AI requires supervision and curation. However, Cummins reiterates that automating responses is particularly useful when people call in looking for straightforward, factual information.

Power Networks, which also delivers a public service, uses AI. ‘We have a few automated vehicles that travel around London finding faults on the network,’ says its commercial solicitor Amy Chesson. ‘The vehicles are supervised so that nobody steals or damages them. I would like our legal team to embrace the same spirit of innovation and look at AI beyond contracts.’

In-house legal departments need to consider different dimensions of the AI discussion that are one step removed from law firms and legal service providers: how to apply AI to the legal function, and how to work with other parts of the business that are using AI, both in terms of shared innovation and in the context of corporate and legal risk.

Risk and reward

There are other risks for AI supervision and transparency, as well as considerations around working with start-ups. Jeremy Aron, general counsel at packaging giant DS Smith plc, has concerns about the limitations and risks of applying AI to in-house legal work. ‘I can see its application to bulk contracts, but part of the challenge – and the attraction – of working in-house is that you never know what kind of problem will hit your desk,’ he notes. ‘Lawyers are naturally cautious so hearing about the uncertain financial stability of some AI providers will create challenges.’

Goodman acknowledges this, but drives home the reason for exploring the possibilities around legal AI. ‘My rationale was to free people up,’ he says. ‘We are being asked to do more for less, and the world is becoming more complex, particularly in financial services. If you can standardise the bulk stuff and get it done quicker and more efficiently, you free up time for value-added work. Another driver is that the in-house role is increasingly about analysing and assessing legal risk. It’s not just about volume. It’s about complexity too.’ He also highlights the advantage of start-up agility around developing legal AI applications. ‘It has been interesting for us because we got in early and [the provider was] looking to move on from NDAs and three-page agreements.’

Williams observes that, even in the volume space, much time and resource is invested in training the machine. This raises questions about data and IP ownership of the output. Although you can buy AI software that has some generic training, what differentiates it from traditional software and makes it bespoke is that it has been trained on your organisation’s data, and rather than using it intuitively, it has to be supervised and given feedback. ‘AI is not a magic wand – it’s just a tool in the armoury of businesses that are competing in a dynamic, rapidly changing market,’ he says.

Goodman is looking for more collaboration between private practice and in-house. ‘I am looking for the law firms that we work with to talk about the tech they are using and suggest better ways of working together. I have not had any law firms come to me and say: “This is what we can do for you and this is how it will cut your bills”.’

Williams has had this type of discussion with Weightmans’ clients: ‘We want to collaborate and co-fund. There has to be some skin in the game for everybody because we haven’t got deep enough pockets to do it all ourselves.’ However, Tembo observes that not all corporate legal departments want their law firms to offer them technology – rather they are looking to change the business model.

Another consideration when it comes to developing AI software is the lack of standardisation between corporate legal departments. Different organisations have different ways of approaching the same type of contract. The data produced by applying this type of technology will drive debate and change.

AI and pricing

Change is driven by the expectation that AI software will bring cost savings to in-house legal departments. Blackburn observes that economic pressures and the move away from billable hours are key drivers of change. ‘Fewer clients are prepared to pay billable hours for due diligence and firms are moving towards fixed pricing for this. It is also down to software vendors to try to be innovative in terms of pricing and move away from traditional approaches. Cloud computing has led to a greater expectation of pay-as-you-go pricing, for example,’ he says. ‘The market is surprisingly price-sensitive, and because Luminance is generic, universally applicable software, it supports pay-as-you-go, pay-for-access pricing options.’

Pricing needs to be competitive because law firms and legal departments expect to pass on cost savings gained from automation to their clients, while preserving or even increasing their own profit margins.

However, this is not a straightforward calculation. ‘Automation will reduce costs, but only when you are dealing with sufficient volume,’ says Williams. ‘If automation were applied to a set of just a few documents, clients would still expect a reduced price, but you would have to factor in the investment in technology innovation and training. You have to decide whether to charge per click, per use or across a block contract. I don’t think the market has worked that out.’

‘As in-house lawyers we have to think about what we mean by value,’ says Kenny. ‘Crunching through the due diligence is low value, but the interpretation of the findings and the subsequent advice is the value that the lawyers bring to the table.’

Supervision and transparency are important complications, particularly in the long term. Currently, the people training AI software and reviewing its output have done the task that the technology is doing. The next iteration of people training the machine and reviewing the output will not have done the original task. This raises the business-critical issue of effective supervision. Without effective supervision, AI software brings in a plethora of business and ethical risks associated with automated decision-making.

Williams suggests that this is exactly the dilemma that has led to the concept of legal engineering. ‘At the moment, people are checking the output of AI software, but in future maintenance will be around servicing the model and the technology rather than the work itself,’ he explains.

Cummins adds that automation reduces risk around the quality of output as machines do not get tired. They read and interpret all content consistently, so quality becomes a quantified risk, rather than an unknown. Blackburn says that output quality is improved by machine learning, training and feedback.

Chesson believes that technology will also transform the way people learn, and this will include learning to work effectively in partnership with automation. This will change the way junior lawyers are trained and the work they do. Deciding where and when human input is required also ties into legal and compliance responsibilities, which are an important part of in-house legal work.

Blackburn acknowledges that AI and automation raise concerns around governance and best practice, but also risk and liability: ‘Fortunately, as humans we’re sceptical enough that there isn’t just an acceptance that the computer can do a better job than humans, or that the computer will spit out some advice and we will blindly follow it. But progress is inevitable and technology will continue to nibble away at what we do as people.’

He adds: ‘At the moment, I think there’s a degree of acceptance that we’re prepared to use technology to try and distil what’s important. The volume of digital documentation continues to grow and that’s an issue that we can’t deal with as humans, so that’s where technology can help us. But it certainly hasn’t caused the ethical and governance questions to disappear into the background.’

Find the gap

In-house legal departments are in a similar position to law firms when it comes to adopting AI software: it has to fit in with business needs and make financial sense. There is an additional dimension of advising and learning from other elements of the business that use emerging technology. This means adopting a coordinated strategy. For example, Goodman has established a technology strategy group within Legg Mason’s legal team. Williams also emphasises the importance of having the right people in the team: lawyers may not know how to code but Weightmans’ innovation team includes data scientists, data analysts and coders. ‘It’s not a legal team; it’s a business team,’ he says.

Ultimately, it is about applying technology to improve customer service, says Chesson, and augmenting lawyers rather than talking about replacing them. She quotes a French phrase, ‘cherchez le créneau’, which is about identifying what people want (from a law firm or legal department). ‘People are emotional, and plenty would rather speak to a lawyer, even though they might be wrong, than deal with a machine that will definitely be right. So, in future there will be lawyers around, and I think as many jobs will be created as there will be taken away, but AI will be good for augmenting, on a very high level.’

The final comment, from Blackburn, acknowledges that although technology can be a great enabler, it also raises business and ethical challenges. ‘It’s reassuring that there’s quite a healthy dose of scepticism, and that people are willing to dip their toe – we are not seeing a wholesale rush to automation.’ Part of seeking legal advice is the reassurance of talking to a lawyer, rather than a computer. ‘While technology will continue to widen its capabilities, we are still some way away from not needing humans as lawyers, and that’s a good thing.’

Joanna Goodman is the Gazette’s IT columnist and author of Robots in Law: How Artificial Intelligence is Transforming Legal Services

This roundtable was kindly sponsored by Luminance