A Canadian province has become one of the first jurisdictions in the world to require lawyers to tell the court if and how they have used artificial intelligence. The new practice direction issued by the Court of King’s Bench of Manitoba on the use of AI in court submissions comes into effect immediately.

The PD states there are ‘legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence’. It continues: ‘To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.’

The direction, signed by chief justice Glenn D. Joyal, covers a province of around 2,000 lawyers, most of them based in the capital, Winnipeg.

Local associate Simon Q.K. Garfinkel said the practice direction was a proactive step towards addressing the challenges posed by generative AI in the legal field by promoting transparency in the courts. ‘Additionally, the direction sets a precedent for other jurisdictions to follow, encouraging similar initiatives to regulate AI use in legal proceedings,’ he added.

The intervention of the Manitoba judiciary will be closely watched by other jurisdictions grappling with how to ensure transparency and fairness in the courts.

Last week in New York, a US judge imposed sanctions on two lawyers who submitted a legal brief containing six fictitious citations generated by the large language model tool ChatGPT. The lawyers and their firm were ordered to pay $5,000 (around £3,900) in total after being found to have made ‘acts of conscious avoidance and false and misleading statements to the court’.

A similar episode has already occurred in England, where a litigant in person presented fictitious case citations generated by ChatGPT in Manchester County Court. On that occasion, the judge went no further than disallowing the submissions and informally reprimanding the litigant.

Practice Direction 57AD of the Civil Procedure Rules, covering disclosure, states that the court may give directions on the use of software or analytical tools, including technology-assisted review software. Parties must also seek to agree on the use of such software. ‘Technology Assisted Review’ includes all forms of document review that may be undertaken or assisted by the use of technology, including but not limited to predictive coding and computer assisted review.

In an article for the Gazette last week, partner Rosie Wild and associate Anna-Rose Davies, of London firm Cooke, Young & Keidan, said that ChatGPT cannot currently explain its predictions, making it unlikely to meet existing transparency requirements.

They added: ‘If these challenges can be successfully addressed, it is likely that the court would be receptive to considering the use of generative AI like ChatGPT. Should its use be approved, new issues would inevitably arise, such as the impact on procedural timelines and the extent to which cost recoverability for disclosure would be permitted when one party has access to the platform while the other does not.’
