The Legal Geek Hackathon was a 24-hour sprint to find an innovative, tech-based solution to the challenges of modernising our courts and increasing access to justice. Our team – a joint entry from the Law Society and Wavelength Law – combined policy, legal and tech expertise. I am pleased to say we won.

What stood out from the event was that the best solutions will arise through collaboration. Developments pursued in isolated disciplines – tech in one space, academics and policy people in another, and legal experts somewhere else – will not harness technology's true potential.

We also need to move away from the attitude of ‘I know it’s important, but I don’t have time to get involved in this debate’. This is no longer ‘years off’; it is happening now.

Consider the use of predictive analytics in helping inform parole outcomes in the US; machine learning-based due diligence software; and the reality of cyber-attacks.

The opportunities and risks are relevant to all forms of law, all types of firm and in-house teams, and all clients. The best way to mitigate those risks is to be part of the design and thinking.

But what do I mean by collaboration? The first step is to engage with the discussions and form your own views. Unlike some, I do not believe the ‘era of the expert’ is over, but we have to think of better ways to engage in the public arena. Knowledge-based decision-making is vital.

Second, encourage colleagues and other interested parties to do likewise – through firms, peer groups and social media.

Third, and most importantly, if we are to reap the maximum benefits, these discussions need to be multidisciplinary. There are countless events convening people working in artificial intelligence (AI), tech start-ups and other stakeholders. The Law Society itself will be holding more events.

Some of these debates will begin to frame the ‘new normal’, setting the tone for what is deemed ‘acceptable’ and bringing into sharp relief the areas that could most benefit from the application of technology.

Microsoft CEO Satya Nadella set out 10 principles for AI in a recent essay (reproduced below). What do you think?

In Davos earlier this year, meanwhile, the theme of the Fourth Industrial Revolution set thousands of conversations going. I conclude that, as a society, we have a choice. We face critical decisions about what we do with the tech we are now capable of building. As UCL’s Sylvie Delacroix has said, we should be careful not to abdicate our moral compass.

Those decisions need to be arrived at through public debate. Consider how momentous they are. If, hypothetically, we could replace all employees engaged in a specific type of office work with automated processes, should we? What are our obligations to those people? What would this do to the fabric of society?

Above all, get involved. A plurality of voices is vital in co-creating a future in which we all have a stake.

AI principles

  • AI must be designed to assist humanity
  • AI must be transparent
  • AI must maximise efficiencies without destroying the dignity of people
  • AI must be designed for intelligent privacy
  • AI needs algorithmic accountability so humans can undo unintended harm
  • AI must guard against bias
  • It is critical for humans to have empathy
  • It is critical for humans to have education
  • The need for human creativity will not change
  • A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision

Source: Microsoft

Sophia Adams Bhatti is director of legal and regulatory policy at the Law Society. The views expressed here are personal.