Last week the EU passed the AI Act, a landmark moment that will shape how artificial intelligence is developed in Europe and will have repercussions well beyond its borders. The act introduces protections against some of the fundamental issues we have already seen with AI, such as a lack of transparency, bias, and inaccuracy, and could serve as a blueprint for other regulatory authorities worldwide.

Taking a risk-based approach is a smart first step by the European Commission, as it establishes a framework that serves as a starting point for determining what level of regulation should apply. By seeking a 'balanced and proportionate approach', the act can provide a minimum level of oversight without stifling technological development.

However, in the UK, we still have some way to go to put in place regulatory guardrails that can instil trust and drive widespread adoption. We cannot afford to deploy AI that humans cannot safely trust, so how do we establish the right guardrails knowing we might not yet have all the answers?

In the extremely competitive and labour-intensive legal profession, the successful integration of AI tools presents unprecedented opportunities for law firms and the wider legal system. Used correctly, AI can allow lawyers to manage tasks such as document review and drafting quickly and efficiently, freeing up their time to focus on higher-value work such as strategy and ideation. And the potential benefits go further still, such as helping to improve access to justice.

According to research we undertook last year, 47% of legal professionals expect increased productivity from the integration of AI into legal work, while 64% predict that their skills will be more highly prized over the next five years as AI use expands. Meanwhile, 40% believe that, with mundane tasks removed, they will have more time to focus on higher-value work. Our research also found that 80% of corporates want law firms to use AI.

More broadly, AI will have a potentially transformative impact on the legal profession, leading to an evolution in traditional career paths, skill sets, and points of entry, as well as driving diversity and access. It can also help address a personnel challenge common to all professional work: the fatigue and overwork that drive talented professionals away from critical career paths.

AI adoption in the legal sector is happening now, and companies that build and apply AI have a key role to play in building trust, with many proactively creating their own guidelines. For example, Google and Anthropic are drafting 'AI constitutions' that outline values and principles for their models to follow, aiming to mitigate the risk of misuse. At Thomson Reuters, we have our own Data and AI Principles to promote trustworthiness in our continuous design, development, and deployment of AI and our use of data.

What concerns some are stories such as that of the New York lawyer who submitted legal arguments, generated largely by ChatGPT, that contained 'bogus judicial decisions … bogus quotes and bogus internal citations'. Such incidents fuel a perception of irresponsible AI use that will prove detrimental to adoption and to our ability to unlock the technology's benefits.

This is why we need thoughtful standard-setting now. UK legal practitioners would benefit from guardrails in the form of regulation around the safe use and application of AI. This would help establish accountability in the AI ecosystem and give businesses the clarity they urgently need to drive AI adoption.

What would AI regulation look like? The EU's landmark AI Act will go a long way towards setting a standard: a balanced and proportionate approach that provides a minimum level of oversight without stifling technological development.

However, in the UK, we also need to recognise the importance of global collaboration, and that a laissez-faire approach to regulation won't instil the trust needed for widespread adoption. Practical UK regulations can be rolled out now, giving businesses the clarity they urgently seek. For example, as in the EU AI Act, the need for transparency can be addressed by requiring companies to be clear when they are using generative AI, such as when customers are communicating with a machine rather than a human, and to provide a detailed audit trail with citations showing where the information used to produce a result came from.

Setting regulations for a sphere as complex as AI is a formidable challenge. However, any lag in introducing regulation could hamper the role AI can play in improving the competitiveness of the UK's world-leading legal profession.

Despite the major differences that already exist between jurisdictions, a common understanding of the risks posed by the technology would be a good starting point. This – combined with agreements to cooperate on AI legislation – would make a major difference.

Furthermore, the European Commission has floated the concept of a regulatory sandbox that would allow companies to test AI products. Done properly, a sandbox should enable innovation to progress at pace.

It may be tempting to take a wait-and-see approach, but UK regulators, lawmakers, and industry need to work together now to be proactive and head off potential issues. We also need to get comfortable with regulating amid a rapidly evolving landscape: one where we may not be able to see the whole picture just yet, and where we must be ready to correct course as the technology advances.

We are already starting to see the benefits of AI in driving productivity and increasing job satisfaction in the legal profession. However, we can achieve even more with thoughtful standard-setting and practical guardrails now. These will help expedite the delivery of AI's benefits, not only for the legal profession but also for communities and broader society.


Kriti Sharma is chief product officer, LegalTech, at Thomson Reuters
