Next week an important event will take place in the ongoing discussions about the impact of artificial intelligence on lawyers’ ethics. The Law Society is one of the sponsors of the Artificial Intelligence in Legal Services Summit on 4 June in London. And at this summit, the Technology and Law Policy Commission will launch its report on algorithms in the justice system.

Jonathan Goldsmith

Last month, an equally important event took place in discussions around AI and legal services. The European Commission launched its ethics guidelines for trustworthy artificial intelligence. Although it surveyed the entire field of AI and not just its use in legal services, the seven key requirements that it identified are directly applicable to our work: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.

I comment below on how I believe those seven requirements apply to lawyers. What is interesting is that nothing new is needed in a lawyer’s professional code, merely the application of our existing values to a new set of circumstances.

Human agency and oversight

Lawyers are expected to supervise and take responsibility for the work that goes out in their name. This applies as much to work undertaken by juniors or paralegals as to work undertaken by machines. For this to happen, a variety of pre-conditions must be met.

First, lawyers must be trained in new technology, so that they understand what it does. Sometimes this happens – increasingly in the USA – as part of a law degree. Sometimes it happens during the training contract (see the SRA’s report on Technology and Legal Services published late last year).

Second, lawyers must keep up to date with developments in technology. This is now specifically referred to in the commentary to Rule 1.1 (which deals with competence) of the American Bar Association’s Model Rules of Professional Conduct, which states that ‘[t]o maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology’.

Technical robustness and safety

This is obviously of key importance for lawyers. It arises from the need to keep data safe and to take steps to ensure cybersecurity. This means not only complying with certain laws, such as the GDPR, and keeping a firm’s operations out of the reach of criminals, but also upholding one of the core values of lawyers: client confidentiality. Without technical robustness and safety, none of these can be assured.

Interestingly, in relation to robustness and safety at any rate, the SRA says: ‘If there is an error or flaw in an AI system run, or provided by, a separate technology company then we are unlikely to take regulatory action where the firm did everything it reasonably could to assure itself that the system was appropriate and to prevent any issues arising’.

Privacy and data governance

This is largely covered above: compliance with laws such as the GDPR, and the core professional value of client confidentiality.

Transparency

The SRA says that firms need to be able to demonstrate that their advice is competent, fair and compliant with other obligations, such as those on confidentiality and conflicts. Without being able to show clients how automation deals with their data and secrets, firms may be failing to comply with the law and their professional ethical obligations. Firms may also be unaware of biases in their systems – see the next requirement.

Diversity, non-discrimination and fairness

This is likely to be the main thrust of the report to be published next week on algorithms in the justice system. There have already been studies showing postcode and racial bias in AI systems used by the courts and others to predict certain outcomes. Information which comes out of an automated system, even an intelligent one, is only as good as the data put into it.

Environmental and societal well-being

These are general considerations, without specific application to lawyers alone. For instance, law firms should consider resource usage and energy consumption in the systems they use, and also be sure that their systems do not harm physical and mental well-being.

Accountability

This complements a number of the requirements already mentioned: lawyers should take responsibility for the AI they use, assess any negative impacts it may have, take action to correct them, and provide redress for those harmed.

The struggles ahead, I anticipate, will take place over ‘Human agency and oversight’ and ‘Accountability’. For instance, must all AI be overseen by a lawyer? (In our jurisdiction, the answer is no, other than for reserved activities.) And who is liable for legal services delivered by AI, perhaps across borders, where the AI was not developed by the provider?