The UK is currently the third-largest artificial intelligence market in the world. The government has acknowledged that AI could be the biggest lever to deliver its five missions under the Plan for Change, especially the pledge to kickstart economic growth. For legal professionals, it is important to keep track of the steps public bodies are taking to support business growth in the UK.
Safety is central to the successful implementation of AI, and the UK is at the forefront of AI safety and cybersecurity initiatives. In April 2023, the UK became the first country to establish a dedicated AI safety taskforce, which evolved into the AI Safety Institute (AISI, renamed the AI Security Institute in February 2025). The initiative gained momentum when then prime minister Rishi Sunak secured commitments from OpenAI, DeepMind and Anthropic to provide pre-release access to their frontier AI models for safety evaluation. Similar organisations are now emerging in other countries.
In March, the UK launched another world-first initiative, the Challenge Fund, led by AISI. Researchers working on AI security threats can apply for funding of up to £200,000 per project to address critical AI security challenges, primarily cyber-attacks and AI misuse; the total funding pool is £8.5m. By tackling security risks head-on, the government aims to boost public trust in AI and remove barriers for those looking to adopt the technology to drive growth.
The Challenge Fund is consistent with the objectives of the AI Opportunities Action Plan, the UK’s national AI plan, published in January and described by the prime minister as ‘a plan to make our country an AI superpower’. The UK is among 34 countries with a national AI plan. By the government’s own admission, the plan is ambitious. Couched in political language, it sets goals to increase investment in world-class computing, data infrastructure and talent; to promote the adoption of AI across the public and private sectors; and to encourage the growth of home-grown UK AI businesses – the UK must be an AI maker, not an AI taker. The plan includes specific targets, but offers no explanation of how those targets were formulated.
The UK is also a co-founder of the International Coalition on Cyber Security Workforces, launched in January. Moreover, it has established a new AI Code of Practice to support directors and board members of UK businesses in overseeing cyber risk management.
The government describes the code as a ‘world-leading AI cybersecurity standard’ and claims it will equip organisations with ‘the tools they need to thrive in the age of AI’. A draft Cyber Security and Resilience Bill is also under consideration in parliament.
Promoting innovation, specifically in the legal sector, is part of the government’s plan for economic growth. The LawtechUK initiative – funded by the Ministry of Justice – is dedicated to driving digital innovation, including AI, in legal services.
The government’s goal of attracting AI investment is also reflected in its approach to AI regulation. It has consistently resisted statutory AI regulation, favouring instead an adaptable, principles-based approach. This position mirrors that of the US and stands in stark contrast to the EU, whose AI Act has established a comprehensive regulatory regime imposing strict legal obligations on AI developers and users.
However, the reintroduction of the Artificial Intelligence (Regulation) Bill [HL] (2025) in March, after it failed to progress into law before parliament was dissolved ahead of the last general election, was an important development. While the AI Bill is a private member’s bill, it should not be dismissed as bound to fail. It proposes a statutory AI authority and codified principles which, if adopted, would mark a significant shift from the current voluntary, sector-specific approach.
Against this backdrop of public sector support for AI development, how is the legal sector responding to the opportunities and challenges presented by rapidly evolving AI tools?
The pace of AI integration in the legal sector is inevitably accelerating. In commercial disputes, AI tools can assist with many tasks, such as research, drafting, summarising and transcribing. Used correctly, AI will enhance efficiency. There is ongoing debate about the extent to which it will change the long-established legal services business model.
The growing integration of AI in litigation is reflected in the evolution of the AI guidance for the judiciary, originally published in December 2023 and revised in April. The revisions mainly introduce additional AI-related terminology and reference Microsoft’s Copilot Chat, a secure AI tool made available to judges through eJudiciary.
In international arbitration, there is evidence that the AI-related soft law issued by arbitral institutions is starting to evolve from a principles-based approach (such as the AI guidelines published by the Silicon Valley Arbitration and Mediation Center and the Stockholm Chamber of Commerce in 2024) towards more detailed guidance (such as the AI guidelines published by the Chartered Institute of Arbitrators in March).
Broadly speaking, the guidelines establish a framework that supports informed decision-making and assists practical efforts to mitigate risks to the integrity of the arbitration process, the parties’ procedural rights, and the enforceability of any ensuing award or settlement agreement. More transparent and collaborative thought leadership across the industry is needed to build market-wide understanding and, ideally, consensus on the challenges and opportunities that lie ahead for practitioners.
Natalia Chumak is a partner at Signature Litigation, London