There has been a blizzard of developments regarding AI. Every day there are more that lawyers in the sector need to master, and that the rest of us might fret about.

Here are a couple from the last few days, highlighting differences in approach between the EU and the US. Bizarrely, the two jurisdictions approach AI from opposite ends, yet are moving towards the same middle ground, creating confusion as they go.
President Trump signed an executive order last week ‘eliminating state law obstruction of national artificial intelligence policy’ – which means bowing to AI industry lobbying to remove US state laws which try to regulate the use of AI, on the grounds that there should be a single federal regulation.
Yet there is no single federal regulation in place to substitute for the state laws.
This resembles the AI and big tech companies using the US government to bully the EU out of applying its own regulations: a recreation of the lawless Wild West, with AI deployed today against the rest of us, encouraged by a stock market rising and rising on investments in a cloudy future of US AI greatness.
Yet a single federal regulation, similar in concept (if not in content) to the EU AI Act, is promised.
On the other side of the Atlantic, a more specific EU approach was seen in the recent publication of the European Union’s Fundamental Rights Agency’s report on assessing high-risk AI (which does not include legal services). The Agency is doing its job: following the publication of the EU AI Act, it is assessing high risk areas against its mandate. Tl;dr – many in the field of high-risk AI systems do not know how to assess or mitigate the risks systematically.
But it is not the content of the report which interests me so much as the qualification in the press release, as follows: ‘Interviews for this report and its contents were finalised before the European Commission issued the Digital Omnibus proposal on 19 November 2025. The report’s findings do not directly address the Digital Omnibus proposal.’
And what is the Digital Omnibus proposal that they feel obliged to highlight? The omnibus has two parts, one related to GDPR and one to the EU AI Act, and, like other current omnibuses being launched by the EU, it aims to simplify, delay and repeal in order to make the EU more competitive. So the very AI Act that is the subject of the FRA report is likely to change very soon. The much trumpeted regulation will be loosened and cut.
There is one giant lesson for the profession from these two recent developments. The forces governing the technology which is elbowing its way into our lives, whether we like it or not, are beyond our control. The advance is buffeted and driven by geopolitical big power politics and economics, and by the hunger for power of a few individuals who are now backed by trillions of dollars in investment. What chance do a couple of hundred thousand solicitors in England and Wales have?
Yet we have one superpower which AI cannot take away. We are a trusted human source.
Yes, we have all accepted supermarket self-checkouts, ATMs and airport self-check-ins. Yes, our clients research their legal problems online. But almost no-one likes dealing with automated voices providing services on the phone; we do not like endless exhortations to press a number from a menu when the menu doesn’t contain exactly our problem, nor do we like having to declare our problem to a machine which misunderstands, either because we have used unexpected words or spoken with an unfamiliar accent.
We don’t trust – or at any rate I don’t trust – words which arrive on a screen, unless they come from a trusted source.
And that is what solicitors are: we are human, and we are a trusted source. I think that is how we are going to have to sell ourselves in the future. We are used to the ‘trusted source’ part, since that has been the basis of our appeal for generations. The ‘human’ aspect needs stating because we are now pitted against machines. It is obvious and absurd to note that we have always been human, but we have not had to highlight it before, because there was no machine competition in sight.
There are now AI law firms which boast that you will be dealt with by computers.
However, given the choice, for anything but the most basic task (like buying milk at the supermarket or a simple paper claim), I guess that most people would prefer to deal with a human. That wish must be even stronger for legal issues which can have serious consequences. So, consult a human, not a machine!
Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society