We are drowning in artificial intelligence stories: either gloom-and-doom or AI-saves-the-world. I am coming to believe it is part of an AI plot to render us comatose before it takes control.

Jonathan Goldsmith

There have been continuing stories focusing specifically on legal services, such as the misuse of ChatGPT in court submissions, reported both in the Gazette (about a litigant in person) and in the New York Times (about a lawyer).

Given its pitfalls, I keep wondering: where is the bar guidance for lawyers on the ethical use of ChatGPT and similar generative AI models? I have yet to find it (even if it is being thought about in academia).

As with so many topics, we are happy to tell governments and other authorities what they should do with AI. The Council of Bars and Law Societies of Europe (CCBE) provided us with just such an example a few days ago in its ‘Statement on the use of AI in the justice system and law enforcement’. It is an excellent statement calling on others to incorporate various principles into the use of AI, such as respect for human rights, transparency, accountability and upholding the rule of law. The CCBE also insists on the right to a human judge.

All well and good, but where is the CCBE’s guidance to its member bars on how lawyers should use the kind of AI which is getting some into trouble before the courts? Should that not come first? (The CCBE is an exemplary organisation, and its omission here is replicated by all similar lawyers’ bodies, so far as I can see.)

Some people say ‘Bars do not have the expertise to devise guidance’ (I certainly don’t) and ‘Bars don’t know how the technology will develop’. The answer in such cases is to borrow and adapt from others who do have the expertise, and who are prepared to draw a line in shifting sand. On that basis, I will take my own advice and not wait for others to come up with guidance. I will draw up my own as a model for the future.

My template comes from the European Commission, which certainly has access to expertise and knows as well as anyone what the future holds. It has recently issued ‘Guidelines for staff on the use of online available generative artificial intelligence tools’, which is perfectly suitable for adaptation to lawyers’ use. (For ease of language, I will use the term ‘generative AI tools’, although some go under other names, such as large language models.)

First, the European Commission’s document lists the kinds of generative AI tools to which it applies: ChatGPT, Dall-E, Midjourney, Bard AI, LaMDA, Aleph Alpha, Bloom and Stable Diffusion. In other words, we are not talking about the AI models which some big law firms are developing for themselves, under their own control.

There is then a description of how the tools work, which is vital background for the rules that follow.

And here come my three lawyers’ rules, based on the document:

Never share confidential or privileged information, or any personal data, with a generative AI tool. That is because anything typed into such a tool is transmitted to the provider, which can subsequently use it to generate future outputs that could be disclosed to another enquirer. This is probably the most important guidance to be issued to lawyers (a minimal illustration of how a firm might police it appears after the rules below).

Always critically assess any response provided by a generative AI tool for potential biases and factually inaccurate information. The input to such tools may be insufficient or biased, and their creators are not always entirely transparent about the data and algorithms used. Asking the chatbot itself whether it is accurate is no good; there needs to be independent verification. Lawyers responsible for the output of others within their firms will need to know whether any of the output was generated by such a tool and ensure its accuracy.

In particular, always critically assess whether the output from the AI tool violates intellectual property rights, and especially third-party copyright. This arises, again, from not knowing what has been fed into a generative AI tool in the first place, and because current tools cannot properly list and credit the materials reproduced in their output. This in turn makes it unwise for a lawyer to reproduce the output directly in documents that become public, such as court filings. (The commission makes that last point one of its rules – never copy the output directly into public texts, particularly not legally binding ones.)

The commission’s final rule is that staff should never rely on such a tool for critical or time-sensitive processes, because of problems with the stability and availability of the services.
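To make the first rule concrete: a firm might put a simple screening step between a lawyer’s draft prompt and any external tool. The short Python sketch below is purely illustrative, and the patterns it checks for (an email address, a UK phone number, an invented client-matter reference format) are my own assumptions; a real firm would rely on proper data loss prevention tooling, not a few regular expressions.

```python
import re

# Hypothetical patterns a firm might flag before a prompt leaves its network.
# Illustrative only: real data loss prevention tooling is far more thorough.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK phone number": re.compile(r"\b(?:\+44\s?|0)\d(?:[\s-]?\d){8,9}\b"),
    "client-matter reference": re.compile(r"\b[A-Z]{3}/\d{4,}\b"),  # assumed in-house format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, why this prompt should not be sent externally."""
    return [
        f"possible {label} detected"
        for label, pattern in PATTERNS.items()
        if pattern.search(prompt)
    ]

if __name__ == "__main__":
    draft = "Summarise the merger terms for matter ABC/12345; contact j.smith@example.com"
    problems = screen_prompt(draft)
    if problems:
        print("Blocked:", "; ".join(problems))  # the lawyer must redact before sending
    else:
        print("No obvious identifiers found - human review is still required.")
```

Even then, the point of the first rule stands: no filter catches everything, and the only reliably safe course is not to put client information into an external tool at all.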

And there we have it: some opening guidance for lawyers on the use of generative AI tools, contained in three basic rules. I hope others develop them further. 

Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society
