News about artificial intelligence moves so fast that what is written today is out of date tomorrow.

Jonathan Goldsmith

For instance, I wrote some weeks ago about the intra-institutional negotiations ('trilogue') to settle the final provisions of the EU AI Act, which is set to become a major influence on the regulation of AI globally. It has to be finalised before the present term of the Commission and Parliament comes to an end next year. The draft act was launched before ChatGPT burst onto the world, and so the negotiators are trying to catch up.

It seemed that generative AI like ChatGPT (or 'foundation models' as they are also called) would be regulated, but then there was a shock development a few days ago. Large European countries like France, Germany and Italy suddenly pushed back against any kind of regulation for this type of AI. Why? Because they have their own tech companies, which are trying to compete globally. These companies fear the EU regulation might drag them down against US and Chinese competitors, and that regulation could also stifle innovation.

So, there you go – the type of AI which, in my view, poses the biggest threat to the rule of law as it applies to legal services might escape EU regulation altogether. But then again, this being AI, where positions zig-zag daily, it might not, if a final compromise is reached.

The recent and unexpected dismissal of Sam Altman as chief executive of OpenAI, ChatGPT's owner, raised similar issues. According to press reports at the time of writing, his dismissal exposed differences in the AI community between those who believe AI is the most important new technology since web browsers, and those who worry that its speed of development brings dangers.

Both these stories show that no-one really knows whether generative AI is overall a good or a bad thing, and whether regulating it will be beneficial.

What has this to do with lawyers? In my view, a lot. Since the debates are moving so quickly, we need to be sure that we have evaluated its impact on our own sector, so that we can shriek above the din to guide policy-makers. The first step is to decide ourselves whether generative AI is overall good or bad for legal services, or maybe good here but bad there. That means reaching agreement on what are the good things it can do, and what the bad.

For instance, it presumably has a great role in reducing unmet legal need by putting automated legal services (without any human intervention at the point of delivery) at the disposal of citizens who cannot afford a lawyer. On the other hand, the provision of legal services by machines poses threats (if not regulated) through the unsupervised delivery of negligent advice across borders.

That is just a starter. AI can do a myriad of things, like document assembly, e-discovery, predictive analysis of cases, arbitration of disputes, even negotiation of contracts, as we saw recently. We need to begin work urgently on discussing all its uses and coming to an agreement about how they should be treated. Without that, we cannot contribute to the debate that will shape not only our own future, but also the bigger question of the rule of law and the fair administration of justice.

It is the role of the bars to take on this task, and I am not sure that any are doing it quickly enough to keep pace with the decisions that will be made.

The American Bar Association has set up a task force on AI and law with a wide brief, including addressing the impact of AI on the legal profession and the practice of law, and identifying ways to address AI risks. But it was created just three months ago, and so we will have to wait for its recommendations.

The Law Society is ahead of the field in many ways. It responded at a high level earlier this year to the government white paper on AI, repeating its message just ahead of the global AI safety summit hosted by the prime minister at the beginning of this month. So it has some worked-out principles that it can use in the coming debate.

Additionally, the Law Society is a pioneer in its launch this week of its first-ever guide to the profession on the use of generative AI, called ‘Generative AI – the essentials’. If words could flash bright lights, I would use them now, because this is really ground-breaking, probably the first in the world.

Next we need to work out our view urgently on the public interest impact of AI in all its varied uses in legal services, so that we can be heard in the debate about its regulation.


Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society's Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society.