John Grisham, the celebrated legal author, is among 17 famous writers suing OpenAI for ‘systematic theft on a mass scale’, the latest in a wave of lawsuits by authors concerned that AI programs are using their copyrighted works without permission.

Jonathan Goldsmith

If you think this doesn’t affect us, ask ChatGPT (as I did) to tell the story of Snow White in the style of the law firm Linklaters. Out pours a script like this:

‘The Applicant, Snow White, is a person of interest in a property dispute that centers around the ownership of a certain residential property located within the Kingdom of Enchanted Woods. The said property, colloquially known as the "Cottage," is inhabited by the Applicant and seven individuals known as the "Dwarfs."’

Haha, very funny – until you come to the bottom, where text like this appears:

‘Linklaters LLP is committed to ensuring the protection of our client's property rights and interests throughout the legal process. We will diligently investigate the matter, explore all possible avenues for resolution, and, if necessary, zealously advocate for Snow White's rights in a court of law.’

[Image: Disney’s Snow White, 1937. Source: Disney/Alamy]

When people are able to ask ChatGPT to give legal advice in the style of a particular law firm, we have big problems.

Who is doing anything about this, apart from the victims forced to sue? The answer is dismal. Of the leading economies, only the EU and China have made any progress in regulation.

China has issued a set of interim measures to manage the generative AI industry, effective only from last month; they require service providers to submit security assessments and receive clearance before releasing mass-market AI products.

The EU is nearing the end of its marathon effort to produce a rules-based regulation of AI. Its problem is that its regulatory proposals – known as the EU AI Act – were launched back in April 2021, which is the Middle Ages when it comes to AI development. More than a year after those initial proposals, ChatGPT and the other generative AI models were launched, and they have changed everything.

The UK is still in the discussion phase. The government had been in favour of a light-touch approach, but then our prime minister expressed the desire to be a global leader in AI safety. Now the UK will host an AI Safety Summit this November. Now, too, the deputy prime minister, in his speech to the UN last week, admitted that global regulation is falling behind current advances, that the big tech companies have country-sized influence … and that something needs to be done quickly and multilaterally. At last!

But so far as UK lawyers are concerned, the EU AI Act is the only game in town. It is currently in trilogue discussions between the three institutions (Commission, Council and Parliament), and it is hoped that it will be passed by the end of this year, or at least before the European Parliament elections and the start of the new Commission next year.

So how does it affect lawyers? Hardly at all: we are too far down the food chain for the regulators to care. But I think that is a mistake. (This assessment is made on the basis of the currently known texts: the outcome of the trilogue is unknown and might be different.)

This article is too short to examine the whole of the AI Act, but essentially it deals with sectors and activities according to risk. Activities which pose an unacceptable risk are prohibited altogether: broadly speaking, these include remote biometric identification in law enforcement and social scoring by public authorities.

Then there is the high-risk category, which requires compliance with the AI Act’s requirements and an ex-ante conformity assessment.

That is where we should come in, but don’t. The original Commission text includes, under the heading ‘Administration of justice and democratic processes’, the following high-risk activity:

‘AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.’

This has been tinkered with by the Council and the Parliament in their proposed amendments, for instance to add alternative dispute resolution (ADR) and activities equivalent to the judicial. We can all agree to those being high risk. The question is whether that is all that should fall under this particular category.

What is omitted is any understanding that lawyers are the first port of call for a client. I have quoted before the observation that the courts may have a higher profile, but ‘law is made and applied through lawyer counselling and planning and often this “private” law has public impacts as great as any ruling of a high court’.

To consider society at high risk of the misuse of AI in the judicial sphere, but not when it comes to lawyers’ advice and transactions, is a mistake. The impersonation of lawyers by machines, and the giving of legal advice by machines, unregulated by anyone – since, as the deputy prime minister admitted, only countries can regulate the big tech companies – should be seen as an equivalent high risk. My example of Linklaters should give us pause.

Our professional organisations should start lobbying about this public risk immediately, even in countries like the UK which are still considering their options on AI regulation.

Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society
