Last week the government announced that Jonathan Fisher KC has begun work on Part Two of his Independent Review of Disclosure and Fraud Offences. The first independent review of fraud legislation since 1986 has been set up to consider the challenges of investigating and prosecuting fraud cases in addition to looking at the operation of the disclosure regime in a digital age. This marks a pivotal moment for the legal sector as it grapples with the complexities of digital-era crime.
The nature and scale of fraud have evolved considerably over the past 40 years, and fraud now accounts for over 40% of all offences recorded in England and Wales. It is perhaps no coincidence that the review coincides with the introduction of the new failure to prevent fraud offence, underlining how important it is for large businesses and corporates to review their policies and procedures.
Part One, Disclosure in the Digital Age, covered a history of disclosure and the right to a fair trial, as well as the disclosure regime legislative framework. A key recommendation is the integration of artificial intelligence to streamline the disclosure process. Currently, the manual review of digital materials is resource-intensive, with the Serious Fraud Office allocating 25% of its 2023 budget to disclosure obligations. By employing AI, the legal system aims to expedite evidence analysis, reduce backlogs and allocate resources more efficiently. However, this adoption also introduces challenges.
AI is playing an increasingly significant role in UK case preparation and evidence disclosure in both criminal and civil proceedings. However, as AI-driven tools become more sophisticated, a question remains: how do we harness the power of AI while ensuring compliance with established legal frameworks and principles of justice?
In February, Solicitor General Lucy Rigby confirmed that the SFO had successfully trialled the use of technology-assisted review (TAR), utilising AI, on a live criminal case, and will continue to do so in future cases. TAR has been recognised as a powerful tool for automating the document review process; using AI alongside it can offer broader capabilities, including pattern recognition, predictive analysis, multimedia analysis and real-time data processing.
In criminal investigations, combining AI with TAR could provide a more holistic and efficient approach, leveraging the strengths of both technologies to uncover insights faster and with greater accuracy, which could be of significant benefit to both the prosecution and the defence. The opportunities here are clear. With adequate training and user knowledge of AI's strengths and limitations, deploying such solutions can:
1. Increase efficiency in sifting through complex digital evidence, resulting in considerable time and cost savings;
2. Reduce the possibility of human error in identifying disclosable material; and
3. Improve resource allocation, allowing legal professionals to focus on strategic case analysis.
However, the risks and challenges presented by the implementation of AI during the disclosure process must not be ignored. Some of these include:
1. While AI can assist, it cannot replace legal judgement. Prosecutors remain personally accountable for disclosure decisions made on their watch.
2. AI may struggle with contextual nuances, leading to unfair or incomplete disclosure and increasing the risk of bias and error.
3. All AI-driven decision-making must be underpinned by transparency and accountability. Processes will need to be explained and open to scrutiny to guarantee the right to a fair trial.
4. Allowing AI to handle sensitive material increases the cybersecurity risk through the likes of hacking, corruption of data, or other malicious cyber activity.
5. The use of AI may also give rise to copyright or other intellectual property infringements.
The scope for procedural and legal challenges arising from AI disclosure failures cannot be overlooked. Serious questions should be asked about procedural fairness and legal accountability when relevant obligations are not met. The following grounds for procedural challenge may arise:
1. If AI-assisted disclosure results in material that could assist the defence being overlooked, there could be grounds for appeal or case dismissal.
2. AI-driven errors that compromise disclosure could undermine the defendant’s right to a fair trial under Article 6 of the European Convention on Human Rights.
3. Defence teams may challenge AI-driven disclosure failures through judicial review, particularly where transparency in AI decision-making is lacking.
4. Under civil procedure, parties could argue that the use of AI was not a reasonable and/or proportionate method for disclosure if it led to key documents being overlooked.
To mitigate such risks and ensure compliance, the following should be considered:
1. AI-assisted processes should always be subject to human verification to minimise errors and bias.
2. Agreement between the parties regarding the use of AI could overcome unclear regulations, prevent procedural challenges and protect the integrity of proceedings.
3. AI tools used should be capable of generating detailed audit trails to facilitate review and challenge if necessary.
4. Those using AI should have sufficient training to better understand and mitigate potential risks.
While the use of AI is growing in legal practice and disclosure, it remains largely untested and unendorsed. Unlike TAR, AI faces several challenges around transparency, validation and accountability. Those using it for disclosure purposes should ensure that all risks are mitigated as far as possible.
Lisa McKinnon-Lower is a partner at Spencer West, London