A pioneer in legal technology has predicted that the billable hour model cannot survive the profession's transition to artificial intelligence.
Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.
Newton said the legal profession should welcome AI, but that lawyers will need an entrepreneurial mindset to make the most of its benefits.
‘You can’t have the benefits and time savings that AI is delivering co-exist with the billable hour model,’ said Newton. ‘The model has been increasingly out of date and at risk for decades. It creates an incentive for inefficiency, and the legal profession is almost the only one that explicitly rewards inefficiency. Something that used to take you five hours will now take five minutes, and you need to justify those four hours and 55 minutes you have given up, because the time spent is no longer reflective of the value lawyers are giving to clients.’
Newton said Clio’s research had shown that clients prefer predictable pricing and that the best firms were already marketing themselves on that basis. While regulators have for years lamented the unmet legal needs of much of the population, AI was now an opportunity for forward-thinking firms to capture that market.
Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.
‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’
While AI can improve efficiency, Newton urged caution about lawyers relying on generative tools such as ChatGPT to carry out legal work. He said: ‘Generative AI can be an extremely convincing liar. It doesn’t necessarily know that it’s telling you something incorrect, and it’s trying to please the person making the enquiry, which makes it the most dangerous type of liar.
‘You have to treat AI tools like a first-year associate and check their work. Except an associate would never make up cases and URLs unless they were trying to get fired as soon as possible. AI is utterly convincing in its presentation of completely incorrect information, but it is trying so hard to please you and to predict an outcome it hopes exists. The associate would never state that something is true with so much confidence.’