Artificial intelligence is now firmly embedded in how businesses make decisions, and for many SMEs and company founders, AI-generated legal advice and DIY templates feel like a sensible response to sustained cost pressure. Tools that promise instant, low-cost legal answers appear practical, efficient and commercially astute.

In reality, however, we are already seeing the fallout. Increasingly, the disputes we handle can be traced back to businesses relying on AI-generated advice or handling legal issues themselves without proper professional oversight. The problem is rarely that the advice is obviously wrong; more often it is 'almost right'. It looks polished, sounds confident and appears to deal with the issues, but it is fundamentally misaligned with how the business operates, how disputes actually unfold or where risk truly sits. That false confidence is often more expensive than no advice at all.
As dispute lawyers, we do not see problems at the point advice is taken. We see them when positions have hardened, relationships have deteriorated and outcomes are at stake. That is where the limitations of AI become stark.
Legal issues do not exist in isolation - they sit within commercial relationships, governance structures and human behaviour. AI is not built to understand bargaining power, internal dynamics or how others are likely to react if something goes wrong.
In disputes, what matters is not whether an argument can be made, but whether it’s the right argument to run, and AI is particularly poor at that distinction. It often throws out every conceivable point without differentiating between decisive points and mere noise. In live litigation this is dangerous: poorly prioritised arguments dilute credibility, distract from core issues and actively weaken a case.
By the time a dispute reaches our desk, the early warning signs are usually clear: payment slippage, scope creep, trails of 'just confirming…' emails, informal variations that were never documented, and a shift from collaborative problem-solving to blame.
One of the biggest risks of relying on AI or DIY advice is delay. Using AI without professional oversight creates a false sense of security: businesses assume the issue will 'blow over', worry that legal input will inflame matters or simply want to avoid cost. The irony is that early advice usually reduces cost, because it preserves options, evidence and leverage.
Delay narrows options. Evidence degrades. Positions harden. Leverage is lost. AI does not understand emotion, reputation or how quickly disputes can spiral once frustration replaces strategy. It cannot assess how a dispute might affect banking covenants, key suppliers, major clients or future transactions.
By the time lawyers are involved, the question is often no longer how to shape the outcome, but how much damage can still be contained.
Another critical limitation is jurisdiction. Many widely used AI tools are trained on broad, international data sets and do not reliably distinguish between legal systems, procedural rules or judicial approaches. UK litigation is highly specific in its procedural expectations, evidential thresholds and costs consequences. Advice that might sound plausible in one jurisdiction can be actively harmful in another.
Closed AI systems trained on jurisdiction-specific material may offer greater accuracy, but they are expensive and inaccessible to most SMEs. Open, general-purpose tools, while widely available, are far more prone to jurisdictional error. Businesses using them often do not realise this gap until a court, insurer or opponent points it out, by which time the damage is done.
It’s important to emphasise that none of this is an argument against AI itself. Used properly, it is a valuable tool that can assist with first drafts, issue spotting, summaries and document organisation. Used that way, it supports decision-making.
The risk arises when AI replaces judgment rather than supporting it. AI cannot assess risk appetite or develop negotiation strategy. It cannot understand sector-specific realities or where leverage will sit if a deal starts to unravel. It can draft words, but it cannot judge how those words will perform in a live commercial dispute.
Legal judgment is not about legal theory; it’s about outcomes. The stakes are not just the wording of a clause, but enforceability, evidence, leverage, reputation and the knock-on effects across the business. 'Commercial' does not simply mean the pounds and pence in the dispute at hand; it includes management distraction, staff confidence and future opportunity.
Fixing problems mid-dispute is disproportionately expensive: evidence may be incomplete, communications may already be unhelpful and relationships may be damaged beyond repair. Early legal input preserves options, protects evidence and often prevents disputes from escalating at all.
The real cost of disputes is rarely the legal fees. It’s distraction, uncertainty and damaged relationships. Businesses that manage risk well understand the difference between controlled and uncontrolled risk. Controlled risk is identified, documented, allocated and reviewed. Uncontrolled risk is informal, undocumented and reactive. That is where AI-generated advice often leaves businesses exposed.
AI-generated legal advice can be a useful starting point, but relying on it to run disputes or litigation entirely is a significant risk. If an issue can affect cashflow, control or reputation, it requires experienced, jurisdiction-specific legal judgment.
This is not about perfection; it’s about avoiding avoidable exposure. For businesses looking to future-proof themselves, recognising the limits of AI - and the continuing importance of legal judgment - is no longer optional.
Satish Jakhu is managing director and head of dispute resolution and litigation at RLK Solicitors