It sounds dull to announce that the SRA is consulting on its Business Plan and Budget 2023-24. But lurking unseen in the bureaucratic verbiage is a firework.

Jonathan Goldsmith

The hidden rocket lies, fittingly, under ‘innovation and technology’. The SRA likes to see itself as a leader in the field, pulling a reluctant profession to the promised land of tech whizz-kiddery. So a regulatory sandbox (compulsory for tech-cred) is dangled before us, and of course plenty of innovation.

All this is aimed at the profession: the profession must change and become more modern; we are behind the times. But one thing the SRA does not mention is the potential impact of technology on itself. If it did look at that aspect, it might feel less superior towards us.

With the arrival of sophisticated AI, we are now in the realm of machines being able to provide legal services without human intervention. Even more dangerous for domestic regulation, we are in the realm of machines being able to provide legal services cross-border, where the machine is located abroad, mostly out of reach of UK law and regulation. Of course, legal advice can be given by anyone in England and Wales, whether by a human or foreign computer. But the reserved activities are something else.

There is nothing magical about the reserved activities – apart from in-person appearances in court – which puts them beyond the wit of AI. If machines from outside the UK provide reserved activities into the UK, what can the SRA or the government do about it? And if that is allowed to happen without regulation, then why are the rest of us solicitors being regulated at all, when our competitors are given a free pass, thereby undermining the point of regulation?

This is not an end-of-lawyer article, but an end-of-regulation question. I buy the argument that there will always be a need for lawyers, regardless of the sophistication of machines. But who will regulate the machines outside our jurisdiction which will be able to do important parts of our work in certain reserved activities?

Not surprisingly, the regulation of AI is a hot question. The UK government has been proud of its light touch approach to regulation (principles-based, adaptive), as laid out in its white paper, ‘A pro-innovation approach to AI regulation’. This has been contrasted to the EU’s more centralised, legislative approach in its proposed EU AI Act.

However, given that the government, following the prime minister’s recent visit to Washington DC, now wants to seize the initiative and lead the way in global AI regulation, some have noted a rapid shift in the UK government’s stance towards more regulation. If that is the way the world is going – including the EU and China, but not the US – then the UK needs to maintain credibility.

Even in the EU’s more regulation-based approach, legal services do not feature. We are too low down the pecking order. The future is impossible to predict, but that could mean that the general activities of machines outside the jurisdiction providing services into the UK are not regulated by a lawyer-specific regulator like the SRA, but by another more generalist body which regulates all AI services.

Why not the SRA? Mainly because it does not have the powers on its own to deal with gigantic foreign tech companies. Only governments and their appropriate delegated bodies have the clout (see Italy’s recent banning of ChatGPT as an example). This is one of the flaws in our own government’s current proposal to leave AI regulation to existing regulators. Of course, the government could give the SRA gigantic powers, but then multiple regulators in professional and other fields would be trying to regulate the same thing.

The question is: is there something specific about legal services provided by foreign AI which demands its own regulator, or are the risks no different from those arising when medical, financial or architectural services are so provided? If there are differences – and that seems likely – do they require a separate regulator, or merely a separate unit within an overall regulator? And, again, if machine risks to legal services are regulated in one way, why should human solicitors be regulated differently by a separate regulator?

These are huge questions, which are existential for the SRA. Being so huge, the answers lie with the government. And it is a reasonable expectation that the SRA should already be considering them.

I hope, therefore, that the SRA begins the debate. The consultation period on its business plan ends on 21 June 2023. The government’s consultation on AI regulation ends on the same day – a fitting coincidence, when the two are so linked for legal services.

Commentators have said that AI will transform the landscape out of recognition. This may be one of the ways it does so.


Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society