The New Zealand Law Society’s recent weekly news update had a story which should concern us all. The Law Society’s library has been receiving requests from lawyer members for cases cited by ChatGPT, the much-discussed AI chatbot. The cases are packaged to look like real cases, with proper citations - the technology has learned what case names and citations look like. The only problem is that the cases don’t exist.

Jonathan Goldsmith

It is called hallucination when AI invents facts with total confidence. The outputs sound plausible, but are utter inventions. It seems that the current forms of AI chatbot, so over-hyped, are much given - like 60s hippies tripping out - to frequent bouts of hallucination.

Examples crop up in the news all the time now.

For instance, one UK broadsheet newspaper reports that it is beginning to receive requests for archived material that it cannot provide – because the articles, cited by ChatGPT, do not exist.

Or there is the case of the US law professor who was falsely accused by a chatbot of sexually harassing students, based on a newspaper article which did not exist (never mind that he did not teach at the university cited, nor had he ever been on the trip where the misconduct was alleged to have taken place).

There are various reasons, apparently, for the hallucinations: how the AI is trained, the data to which it has access, problems with encoding, or biases. But that doesn’t matter to us lawyers – that is for AI engineers to resolve. We need to know only that it happens, and with increasing frequency. (I have personal experience: in answer to a question I posed, ChatGPT confidently cited research studies which did not exist.)

Regulation is coming. The Italian data protection authority last week put a temporary ban on ChatGPT because of potential infringement of EU data privacy laws. Complaints are arising in other EU member states too, and there is talk of action at EU level. AI companies need a legal basis to collect and use personal data, must be transparent about how they use it, must keep personal data accurate, and must give people a right to correction. With so much hallucination, how will that work?

The UK government has come up with its own approach to AI regulation, more by way of encouraging innovation than clamping down on misuse. There will be no single AI regulator, nor any new laws. How long such a light touch will last, particularly given the stricter measures being taken in other countries, including China and the US, is another matter.

In other words, meaningful regulation at the level at which it will touch on day-to-day use by lawyers lies in the future. In the meantime, chatbots are hallucinating and confidently pumping out false information all the time.

This is a case, in my view, where lawyers’ professional bodies should take notice.

That is because it is inevitable that if such machines exist, lawyers – like everyone else – will use them. Some law firms have already put up material so that clients know the difference between using ChatGPT and using a real lawyer.

An early taste of what guidance for lawyers might look like was published last week by US academics from the University of Minnesota Law School.

It seems from their paper that we have all been wrong in our approach to ChatGPT. We have been lulled by search engines into typing in a single question and then expecting reams of options which we can investigate. The modern AI chatbots don’t work like that. They come up with a single answer.

If the new machine were perfect, it would ask you further clarifying questions as to what you meant by your first question. Instead, the model is built on the expectation that you will enter into a dialogue with it, following up general questions with more specific ones, and so challenging (and teaching) it. You should first prompt the machine with more and more specific questions, and then personally verify its answers by asking it to quote specifically from cases that you have provided to it, rather as you would with a summer intern. That’s right, it needs mentoring and supervision. The paper provides extensive examples of the kinds of questions you should ask.

It is true, as all the hype says, that if lawyers don’t become acquainted with, and then use, the intelligent chatbots which are now available, we will fall behind in terms of providing an efficient and high-quality offering. Others will use them and overtake us.

But it is also true that the chatbots are not just slot-machines. They need a lawyer carefully to oversee their legal output. They are inhabited by hippies high as the sky, who sometimes press out wise answers, but sometimes mere hallucinations. 

Will the Law Society consider guidance?

 

Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society.

 
