Sometimes a word or phrase becomes prevalent simply because nothing else seems available to fill the void and express the matter at hand.


Dr. Lance Eliot

One such example is the nebulous and somewhat infamous 'robo-lawyer', also stated bluntly as a robot lawyer.

A glance online reveals voluminous articles and postings that relish using the robo-lawyer moniker as a headline-grabbing stunt. Admittedly, the usage tends to work and gets the attention of readers. In that sense, it is an effective trigger or communicative signal.

Plus, one might argue that there is nothing else that so succinctly conveys the same notion, namely the idea that Artificial Intelligence (AI) is gradually entering into the legal domain and increasingly becoming part-and-parcel of the latest legal tech offerings.

Unfortunately, the robo-lawyer notion carries a lot of baggage with it.

The innate implication of the vaunted phrase is that there exist instances of AI that are sentient or that have reached a level of capability equal to human intelligence.

Lawyers and other legal professionals become instinctively worried that they are on the verge of being replaced by such AI.

The public-at-large is led to believe that a legal chatbot can provide legal advice comparable to that of a human lawyer.

And so on.

Let’s be clear about this: no AI today can analyse the law and proffer legal advice in anything akin to the manner of a flesh-and-blood attorney. Sure, AI can demonstrably aid a lawyer by piecing together contract passages when drafting a new contract, or by conducting a quick search across a corpus of court cases to find ones that are salient and notable for a new case being pursued, but none of this is the same as exhibiting the robust cognitive prowess embodied in human lawyering expertise.

Over time, AI is improving thanks to advances in Natural Language Processing (NLP), Machine Learning (ML), Knowledge-Based Systems (KBS), and the like, though the possibility of sentience remains a farfetched dream, and likewise so does an AI-based robot lawyer that autonomously dispenses bona fide legal advice.

For now, we need to keep a balanced perspective: AI does have merit and will additively boost those who practice law, and yet we should realise too that AI has a long way to go before it gains substantive autonomy.

In the meantime, what shall we do about the handy robo-lawyer naming that regrettably overstates and misleads in both tone and substance?

A sounder approach consists of referring to levels of autonomy, specifically geared toward the field of AI and law.

Based on my research, there are seven levels of autonomy that can be used to appropriately designate the application of automation and AI as it applies to legal discourse and reasoning:

Level 0: No legal automation

Level 1: Simple automation for legal assistance

Level 2: Advanced automation for legal assistance

Level 3: Semi-autonomous automation for legal assistance

Level 4: Domain autonomous legal advisor

Level 5: Fully autonomous legal advisor

Level 6: Superhuman autonomous legal advisor
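To make the taxonomy concrete, the levels above can be sketched as a simple enumeration. This is purely an illustrative snippet of my own devising (the class and function names are not part of any standard); it encodes the idea that Levels 0 through 3 are forms of legal assistance to a human lawyer, while Levels 4 and above denote a system acting as a legal advisor in its own right:

```python
from enum import IntEnum

class LegalAutonomyLevel(IntEnum):
    """Seven levels of autonomy for AI applied to legal reasoning."""
    NO_AUTOMATION = 0         # No legal automation
    SIMPLE_ASSISTANCE = 1     # Simple automation for legal assistance
    ADVANCED_ASSISTANCE = 2   # Advanced automation for legal assistance
    SEMI_AUTONOMOUS = 3       # Semi-autonomous automation for legal assistance
    DOMAIN_AUTONOMOUS = 4     # Domain autonomous legal advisor
    FULLY_AUTONOMOUS = 5      # Fully autonomous legal advisor
    SUPERHUMAN = 6            # Superhuman autonomous legal advisor

def is_autonomous_advisor(level: LegalAutonomyLevel) -> bool:
    """Levels 4 and above denote a legal advisor rather than a
    system that merely assists a human lawyer."""
    return level >= LegalAutonomyLevel.DOMAIN_AUTONOMOUS

# An NLP query feature in an e-discovery package would sit at Level 2,
# so it does not qualify as an autonomous legal advisor:
print(is_autonomous_advisor(LegalAutonomyLevel.ADVANCED_ASSISTANCE))  # False
```

The point of such a scale, as with self-driving cars, is that a single level number succinctly answers the question of whether a system is assisting or replacing the human professional.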

Those of you familiar with self-driving cars are likely aware that a worldwide standard exists for levels of autonomy in the case of cars, trucks, and other ground-based vehicles; the scale proposed here uses a similar format and convention to depict the various levels of autonomy.

Having such a systematic means to refer to self-driving is handy, since it can readily denote whether a given AI-based driving system is merely aiding a human driver or intended to replace a human driver. This also allows for catching those who seem to exaggerate or stretch what they offer as an AI driver, by asking them which level of autonomy their vehicle has actually achieved.

We can do the same in the field of law.

For example, a new feature added into an e-discovery software package might make use of NLP, allowing for queries to be made by using everyday conversational language rather than having to use an arcane and highly tech-oriented set of commands.

Can this added NLP capability unabashedly be touted as a robo-lawyer?

A vendor might certainly hope so, and since there is no particular restriction or accepted definition regarding the meaning of robo-lawyer, this kind of grandstanding can easily be undertaken.

If the NLP capability were compared against the levels of AI and law autonomy, presumably it would fall into the Level 2 category, in which case we would all immediately know that it has not yet leaped into the autonomous realm.

After a while, once the levels become familiar and commonly referenced, the succinctness of referring to level numbers would be quickly recognised, just as it is in the self-driving arena. Existing Teslas using Autopilot are known to be at Level 2, while Waymo's self-driving public tryouts consist of Level 4 vehicles. Autopilot is still at the driver-assistance stage, while Waymo is piloting at the autonomous, no-driver-needed levels.

At first, it will be hard to promulgate the autonomous levels of AI and law and relinquish the flashier robo-lawyer convention. Gradually and inexorably, though, it will become apparent that all parties benefit from a realistic means of establishing where AI stands in performing legal reasoning, with the quickness of simply stating what level of autonomy a legal tech system or piece of software has attained.

It is time to drop the robo-lawyer hyperbolic references and embrace a prudent and reasonably crisp set of levels for fairly stating what AI is legitimately contributing to the field of law.

 

Dr. Lance Eliot is chief AI scientist for Techbrium Inc. and Stanford University Fellow for the CodeX: Center for Legal Informatics at the Stanford Law School.

Formerly a professor at USC, his Forbes column on AI has amassed over 3.1 million views, and his most recent book is entitled 'AI and Legal Reasoning Essentials.'
