The European Commission last week announced the publication of a report, whose title (and much of whose content) does not qualify it for the most sensational read of the year. It is called ‘Comparative Law Study on Civil Liability for Artificial Intelligence’. We can easily forgive its dryness, though, because it deals with something which either is, or will soon become, central to our work, namely who is to pay the bill when AI gets it wrong and causes harm.
Before dealing with the substance of the report, I will change topic, but not theme. Reading it reminded me of a message which John Kerry, the United States Special Presidential Envoy for Climate, gave to the American Bar Association last month, when he said: ‘You are all climate lawyers now, whether you want to be or not’. He listed some of the practice areas touched: personal injury and damage to property, bankruptcy, transactional matters, financing, energy, construction, procurement, and land use.
In other words, climate change has become a permeating topic, of which we have to be conscious when advising clients.
Similarly, when reading the European Commission report, I was reminded that we are also all technology lawyers now. AI is just one of the many expressions of technology flooding our lives, affecting clients and lawyers alike. We may not practise within the technical field of IT, but we nevertheless need to know about the systems that we use in the office, the publicly available or bespoke platforms that we use in communicating with clients, the software put in place for dealing with courts and other government agencies, and any other machines or digital systems that might be part of our practice. Ignorance may lead to breach of confidentiality, breach of data protection rules or other liability for an error.
The European Commission’s report points out that AI liability raises particularly difficult issues in law because, by definition, there is no direct human agency in its actions, and the law of tort has been built around human agency. On top of that, ‘the very nature of AI systems and their peculiar features such as complexity, opacity, limited predictability, and openness’ make liability claims even more difficult.
It gives the example of an accident caused by a self-driving car: the fault may lie in hardware or software and, if software, perhaps in an over-the-air update not originating from the car manufacturer, or in flawed data (collected either by the vehicle itself or by an external provider), or in errors in processing that data.
Interestingly, although it is an EU report, it contains comparative information about the UK, allowing us to see how we deal with tort and AI differently to our neighbours.
There is a surprising quote in its footnotes: ‘The absence of any strict liability for road accidents is perhaps the most marked difference between English law and that of most European countries.’ It follows this by noting ‘the English peculiarity of an insurance-based solution for autonomous vehicles’, explaining that our solution is the Automated and Electric Vehicles Act 2018 (AEVA), which brings self-driving cars in line with driver-led vehicles within our compulsory motor insurance scheme.
Of course, fully autonomous cars are not the only issue, because most new cars already carry semi-autonomous driver assistance technology such as cruise control, self-parking, lane assist and emergency braking, which could make liability issues even more complicated.
The report deals with three use cases for comparative purposes: self-driving cars, autonomous lawn mowers and harvesters, and drones. But it points out that AI can cause damage beyond human bodies and property. For instance, it can breach discrimination and privacy laws when used for predictive purposes within criminal law or fair trade practices.
AI is creeping daily into our professional lives, too. Within law firms, it is being used in areas such as:
- document automation
- due diligence, including contract review, legal research and electronic discovery
- predictive technology and legal analytics to forecast the outcome of litigation.
Errors could have financial and other consequences. Yet we as lawyers tend to know next to nothing about the assumptions, software, data and other components of the systems we use.
To return to the two examples I have given, both the Law Society and the SRA are well aware of the permeation of technology, although I find it surprising that training for all lawyers on the kinds of IT issues I have raised is not mandatory. It is not enough, in my view, to expect lawyers to know on what subjects they need to be trained.
Regarding climate change, the Law Society is now trying to come to grips with the subject, but you will be hard put to find any substantive mention of the topic on the SRA website, which I find a failing.
Jonathan Goldsmith is Law Society Council member for EU matters and a former secretary general of the Council of Bars and Law Societies of Europe. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society.