Protests are being staged outside the offices of artificial intelligence companies in San Francisco (Anthropic) and London (Google DeepMind), calling on the companies to halt the race to develop ever more powerful AI because of the risk it poses to humanity.
At a time when AI dominates many discussions, and when law firms – and the Law Society – are gearing up to deal with the overwhelming flood of its consequences, these protests may seem like King Canute and the waves. (I stand in spirit with the protesters, but I am not prepared to go on a hunger strike with them.)
A similar metaphor of powerlessness applies to the UK's own position in relation to AI.
Last week, during President Trump's visit, a 'Tech Prosperity Deal' was signed under which major US tech companies will invest £150bn in the UK. The investment has mostly been cheered, but questions remain as to whether it is good news for the UK overall (jobs, support for the development of our own tech industry) or bad (long-lasting dependence on US infrastructure, heavy consumption of water and energy). Does our government even have a choice? We are in the dark, and someone has to decide which way to go.
The same applies to the regulation of AI. We are caught between the EU, seen by some as having killed its chances of becoming an AI superpower through its over-regulatory AI Act, and the 'wild west' of the US, where the government positively does not want AI to be regulated, so as to give the country a commercial advantage.
Which way will we go? The government now plans a comprehensive AI bill in the next parliamentary session. But is there a middle way which will keep us safe and prosperous? Is safety more important than prosperity? Will the AI development bubble have burst by then?
These are the huge and imponderable questions crashing over us, which we are as powerless to stop as King Canute was the waves.
We as lawyers have no choice but to adopt AI (or the machines will give advice and complete transactions without us, under the guidance of non-lawyers). But we do have choices in some of the other questions.
For instance, the Law Society will have to decide on its approach to the government’s new AI bill when it is published. On behalf of the profession, are we for or against the regulation of AI, or neutral? (Of course, that is an empty question, since it depends on what kind of regulation; but the general direction of our response should be discussed early.)
The EU AI Act regulates according to the level of risk posed. Yet the possibility of machines delivering legal services without insurance or qualification was not even considered a risk under the act. Is this something we should be lobbying on domestically now (since it is already happening)? There are tangled questions around liability, remedies and consumer protection on which our views are important.
The government has also said that the new AI bill will cover the fraught question of copyright; in other words, whether those training large language models should be permitted to go on gobbling up our material without our consent, including the jurisprudential and legislative analysis on our law firm websites.
When this was considered during the passage of what is now the Data Use and Access Act (DUAA) 2025, the Law Society took a nuanced position. Once the act had passed, the Society said that more clarity was needed around how AI and other tech companies can use copyrighted content without permission.
But should the Law Society wholeheartedly get behind the creative industries, led by Elton John, to oppose altogether the use of original material in AI training when there is no consent? Is it in our interest – and more importantly, the public interest – that legal work is gobbled up like this, if it will be used to issue advice without any of the guardrails that a regulated profession provides?
(The Law Society last week published its guidance on the DUAA, with many helpful pointers on how the new act affects day-to-day practice in areas such as conveyancing, family and crime: tinyurl.com/mw2fs6zz.)
The questions posed by the race to develop more and more powerful AI are coming at us thick and fast, and cannot be avoided. They are so difficult, and imbued with so much risk – for instance, in threats to skills and access to justice – that it is tempting to wish it would all go away.
But it won’t go away (although I privately wish the protesters outside Anthropic and Google DeepMind every success).