The prevailing narrative about artificial intelligence in legal practice is reassuringly simple. The machine assists, but the lawyer always remains in control. 

Akber Datoo

That formulation is comforting, but we must be alive to the fact that it is also incomplete.

If one reads the recent Mazur judgment carefully, and considers it alongside the courts’ increasingly direct interventions in cases involving AI misuse, a more complex picture emerges. It is not one of lawyers confidently directing intelligent systems, but of a profession at risk of occupying a structurally exposed position within them.

The analogy I would like to use is that of a fuse. A fuse does not direct the flow of current. It is designed to absorb failure when the system is under strain. 

This outcome is not inevitable. However, it is a real possibility if AI is introduced into legal workflows without a corresponding shift in how responsibility, authority and understanding are organised.

What happened in Mazur

In Mazur v Charles Russell Speechlys, the High Court considered who is legally entitled to conduct litigation, a reserved activity under the Legal Services Act 2007. A senior employee within a law firm, who was not an authorised solicitor and did not hold a practising certificate, had undertaken substantial elements of litigation work. It was argued that this was permissible because the individual operated within a regulated firm and under supervision. The High Court rejected that position, taking a strict view that supervision did not amount to authorisation and that only authorised individuals could conduct litigation.

The Court of Appeal took a less restrictive approach. It held that an unauthorised person may perform tasks within the scope of conduct of litigation for and on behalf of an authorised individual, provided the authorised individual retains responsibility for those tasks and puts in place appropriate arrangements for delegation, direction, management, supervision and control. The detail of what those arrangements require in practice is, importantly, for the regulators in England and Wales to spell out, within the wider framework overseen by the Legal Services Board. For solicitors and SRA-regulated firms, that means awaiting further practical guidance from the SRA.

That principle sits uneasily alongside the way legal work is currently evolving. Legal services are increasingly disaggregated. Drafting is modular. Research is accelerated. Document review is partially automated. Tasks are distributed across paralegals, trainees, associates and partners, legal operations teams, external providers and, increasingly, AI systems capable of generating plausible legal content at scale, alongside broader legal technology platforms that execute legal tasks across workflows.

This transformation is not something to be resisted. It represents a significant opportunity. Properly understood and implemented, AI can enhance legal reasoning, improve efficiency and enable more accessible and responsive services. However, the production of legal work is no longer a single, contained act of professional judgment. It is a system of interconnected processes.

The law, by contrast, continues to operate on a different assumption. It looks for a name on a document, a signature, a statement of truth. It looks for a person who can be held accountable. Disaggregation distributes labour. It does not distribute responsibility. The law permits delegation. It does not permit the loss of control.

This is the tension that Mazur brings into focus and that AI intensifies. As processes become more fragmented and more rapid, it becomes harder for any individual lawyer to exercise the kind of direct, continuous control that the legal framework presupposes. Yet the requirement to stand behind the work remains unchanged. Responsibility remains human, even as control becomes distributed.

This is why the current reliance on the phrase 'human in the loop' is often insufficient. The presence of a human reviewer is not, in itself, a meaningful safeguard unless they retain real control, supervision and responsibility for the work.

Mazur does not prohibit modern workflows, but it does not give them blanket approval either. It makes their lawfulness depend on responsibility remaining anchored to an authorised individual exercising genuine direction, management, supervision and control. The challenge for modern legal technology is that control becomes harder to locate as processes disaggregate and systems grow more complex.

AI adds a further question that Mazur could not itself answer. The Legal Services Act 2007 regulates who is entitled to carry on reserved legal activities, including the conduct of litigation, but it does not expressly map the point at which an AI system ceases to be a tool used by an authorised person and begins, in substance, to determine or execute part of the litigation process. That boundary matters. An authorised individual may still be named on the file and formally responsible, but if the system is selecting the next procedural step, generating the pleading, triggering filing or service, or shaping the litigation strategy with only nominal review, the question becomes whether that individual is genuinely conducting the litigation, or merely absorbing responsibility for a process driven elsewhere.

A lawyer who does not understand how an AI system produces its outputs, who cannot interrogate those outputs effectively, or who lacks the time and authority to challenge them, is not exercising professional judgment in any substantive sense. In such circumstances, the human role risks becoming symbolic rather than real.

The courts have already begun to articulate this distinction. Cases involving fabricated citations and other AI-related errors have been treated as failures of professional responsibility. The origin of the error is irrelevant. What matters is that it was presented to the court under the authority of a legal professional. The obligation is not merely to supervise. It is to verify. This is not an argument for restraint in the adoption of AI. It is an argument for seriousness in its implementation.

The potential of AI in legal services is considerable. It can enhance analytical depth, reduce inefficiency and allow lawyers to focus their expertise where it adds the greatest value. However, realising that potential requires a shift in mindset across the entire legal ecosystem.

For solicitors and SRA-regulated firms, the SRA will have a key role in helping to translate the Court of Appeal’s judgment into practical guidance, particularly as the legal profession increasingly adopts AI and other digital tools. The judgment identified the need for proper direction, management, supervision and control, while leaving the detailed application of those concepts to the regulatory framework. The SRA has indicated that it is reviewing its guidance following the judgment and will update it where necessary. As that work develops, practitioners would benefit from practical examples of how supervision should operate in modern, technology-enabled workflows, including the level of review required, how AI-generated work should be checked, when sign-off may be needed, and what records should be retained. That practical clarity will help firms use new tools confidently while maintaining appropriate professional responsibility.

Courts must continue to uphold standards of accountability, while recognising the realities of modern legal production. Judges are not external observers of this transformation. They are an integral part of the legal system and have a critical role in shaping how these standards are interpreted and applied in an AI-enabled environment.

Clients, too, must engage with this transition. Many are accustomed to traditional law firm models and pricing structures, but must now grapple with the operational and economic realities of AI-enabled delivery. That includes questions of data, including business data, and how it is used within AI systems to unlock value. It also requires a more sophisticated understanding of risk. If a system relies on a named professional as the ultimate point of accountability, that is not incidental. It is a deliberate allocation of liability. Clients should be clear when they are, in effect, asking for that professional to act as the fuse.

AI literacy is no longer optional. It is now a core component of professional competence and a prerequisite to the proper discharge of legal responsibility. Understanding how AI systems generate, structure and execute outputs, where they fail, and how to interrogate them is not a technical curiosity. It is the foundation of competent legal practice in an AI-enabled system.

You cannot meaningfully oversee a system you do not understand. And you cannot safely sign off on work you do not have time to interrogate. Oversight must be designed, not assumed. Without that understanding, it is difficult to say that the lawyer retains the direction and control required by law.

If legal workflows evolve without equivalent attention to how control and supervision are exercised, there is a risk that responsibility will continue to concentrate at the point of formal sign-off, even as meaningful control is exercised elsewhere. In that scenario, the lawyer becomes both the seal of approval and the point at which failure is absorbed. That is the role of a fuse.

However, AI also presents an opportunity to design something better. It allows the profession to reconsider how work is allocated, how outputs are validated and how responsibility is exercised in practice. The objective should not be to retain a human presence within the process as a matter of form. It should be to ensure that the human within that process has the capability, authority and time required to exercise genuine judgment.

Mazur serves as a reminder that the law will continue to attach responsibility to individuals, not systems. The law permits delegation, but only where direction, supervision and control remain real rather than nominal. It does not permit the abdication of responsibility. The task now is to ensure that our systems are constructed in a way that makes that responsibility meaningful.

If that is achieved, AI will strengthen legal practice.

If it is not, the system will still require a point at which failure is absorbed, and the profession may find that responsibility remains firmly attached even where meaningful control has quietly shifted elsewhere.


Akber Datoo is co-chair of the Technology and Law Committee, Law Society council member and founder & CEO of D2 Legal Technology