AI as co-counsel – risks and rewards

There are numerous issues for lawyers to be aware of before embracing generative AI.

Nick Abrahams

1.    Confidentiality: the information input into AI systems is not necessarily confidential, posing a potential risk to lawyer-client privilege. Lawyers need to be careful about the type of information they input into these AI tools to ensure they meet their professional obligations regarding confidentiality. Additionally, lawyers should be aware of the privacy rights of any person whose details are entered into the AI system.

2.    Hallucinations: generative AI systems are known to generate ‘hallucinations’, which are inaccuracies or fabrications in the generated content. This could lead to the production of false or misleading information, which, if not adequately reviewed and corrected, could have significant legal consequences.

3.    Reliance and oversight: the convenience and efficiency of generative AI may lead to over-reliance on the technology, potentially reducing human oversight. Lawyers must ensure they review and approve any AI-generated output.

4.    Bias: AI models are susceptible to bias, as they are trained on existing datasets that may contain implicit biases. Biased outputs could affect the fairness and quality of the legal advice or documents created.

5.    Accountability: with the use of AI in legal practice, the question of who is responsible when things go wrong becomes more complex. If a mistake occurs due to AI-generated advice or documents, determining liability could be challenging. Lawyers must consider these potential accountability issues when utilising generative AI.

Prompt engineering for lawyers

Prompt engineering is the process of crafting effective and precise instructions or queries to an AI system to obtain desired results. For lawyers using AI tools, learning and excelling at prompt engineering is crucial to maximise the benefits of AI. By employing various prompt engineering techniques, lawyers can efficiently utilise AI systems to generate accurate and relevant legal information, research, and documents.

1.     Detail: provide comprehensive and specific information when formulating prompts to AI. Include relevant details such as jurisdiction, legal principles, case citations, and any other necessary context. For example, instead of asking, ‘What are the legal requirements for a valid contract?’ a better prompt would be, ‘Under NSW law, what elements must be present for a contract to be enforceable? Please cite relevant case law’.

2.    Define AI’s role: clearly specify the AI’s role in the prompt to ensure accurate and tailored responses. For instance, you could start with a prompt like, ‘As a brilliant legal research assistant, please provide an analysis of recent court decisions regarding privacy rights in the digital age’.

3.    What voice: indicate the desired tone or voice for the AI-generated content, whether authoritative, persuasive, technical, or even mimicking a specific individual (such as Steve Jobs). Specifying the voice will help align the AI’s response with the intended purpose. For example, you might request, ‘Please draft a compelling submission in favour of stricter regulations on environmental law compliance, adopting the voice of a successful environmental lawyer’.

4.    Recipients and channel: specify the intended recipients and communication channel for the AI-generated content. This ensures the response is tailored to the appropriate audience and aligns with the chosen platform. For example, you could request, ‘Compose a concise summary of recent legal developments in intellectual property law for a LinkedIn post targeting fellow lawyers’.

5.    Sample answers: provide an example or template of the desired response structure to guide the AI’s output. This helps the AI understand the expected format and content organisation.

6.    Chaining – it remembers everything: leverage the AI’s ability to retain context by using chaining prompts. Start with broader enquiries and progressively narrow down the focus by asking follow-up questions. For instance, you could begin by asking, ‘Provide the outline for a 500-word article on cybersecurity laws, including sub-headings’. Then after receiving the answer, follow up with prompts such as, ‘Please provide five bullet points of ideas under each of the sub-headings you mentioned’.

7.    Ask to change specific points: if the AI-generated content requires modifications, request specific changes to individual points or sections. This enables you to fine-tune the response according to your requirements. For example, you could ask, ‘Please revise the second paragraph to emphasise the potential financial consequences of default’.

8.    Formatting options: take advantage of the AI’s capabilities by using formatting options such as headings, bold, italics, numbered lists, tables and even emojis. Direct the AI to structure and format the content as desired. For example, you could instruct, ‘Please draft a LinkedIn post on this case using emojis’.

By employing these prompt engineering techniques, lawyers can effectively harness AI systems to generate legal research, documents, and arguments tailored to their specific needs, thereby enhancing their efficiency and productivity. But take care: AI-generated content is often wrong, so verify everything.
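For readers working with AI tools programmatically, the techniques above can be combined into a single structured prompt. The sketch below is purely illustrative (the helper function and all parameter names are hypothetical, not part of any real tool or API); it simply shows how role, jurisdiction, voice, audience, and formatting instructions might be assembled into one prompt string:

```python
# Illustrative sketch: assembling the prompt-engineering techniques
# (role, detail, voice, audience, formatting) into one prompt string.
# All names here are hypothetical, for demonstration only.

def build_prompt(role, task, jurisdiction=None, voice=None,
                 audience=None, formatting=None):
    """Assemble a structured prompt from the techniques described above."""
    parts = [f"As {role}, {task}"]
    if jurisdiction:
        # Technique 1: detail -- pin down jurisdiction and ask for authority.
        parts.append(f"Limit the answer to {jurisdiction} law and cite relevant case law.")
    if voice:
        # Technique 3: specify the desired voice or tone.
        parts.append(f"Adopt the voice of {voice}.")
    if audience:
        # Technique 4: tailor the response to recipients and channel.
        parts.append(f"Tailor the response for {audience}.")
    if formatting:
        # Technique 8: direct the structure and formatting of the output.
        parts.append(f"Format the output as {formatting}.")
    return " ".join(parts)

prompt = build_prompt(
    role="a brilliant legal research assistant",
    task="summarise recent court decisions on privacy rights in the digital age.",
    jurisdiction="NSW",
    audience="fellow lawyers on LinkedIn",
    formatting="a concise post with sub-headings",
)
print(prompt)
```

The same structure supports chaining (technique 6): the first call can request an outline, and follow-up prompts can then refine individual sections.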

Nick Abrahams is global co-leader, digital transformation practice, at Norton Rose Fulbright, and co-founder of online legal service LawPath. Abrahams is a professor at Bond University, where he teaches The Breakthrough Lawyer, an online coaching programme designed to help lawyers grow as legal leaders and innovators, which will be available to UK lawyers in 2024. This article is the second in a three-part series.