There is a blind spot in the AI ethics debate: humans.

You wouldn’t want a surgeon to operate on you without a medical licence. You wouldn’t want an electrician without safety training to wire your property. So why do we allow AI systems to diagnose cancers, decide on benefits applications, or identify criminals without requiring that the individuals who design them be subject to any professional regulation?

Where a job involves opaque expertise and can be dangerous if done badly, we often require practitioners to be regulated. Professional regulation usually involves mandatory training and certification, followed by ongoing requirements for the rest of an individual’s career. For particularly important professions, it can be illegal to practise without a licence. Lawyers may disagree with some policies and individual decisions of the Solicitors Regulation Authority, but very few would question the need for a regulator at all. The development of AI systems has all the features that justify professional regulation. We need a code for coders.

As AI systems become more powerful, so do their creators. From fears about bias in facial recognition or employment decisions, to concerns over fake news and even existential risk from AI-created pandemics, the dangers of AI are well known. Calls to regulate AI are growing. But discussions to date have tended to focus on the countries and companies providing AI systems rather than the individuals who develop them. The design, data and training of AI systems can all shape how they operate. The effectiveness of any regulatory system depends ultimately on the humans who make these decisions.

For thousands of years, doctors have sworn the Hippocratic Oath: to do no harm. It has evolved into a rigorous and detailed regulatory code for medical professionals, and doctors who fail to meet its requirements can be sanctioned or even struck off. As a result, the public can generally rely on doctors to act in their best interests. Professionalisation can also be an important counterbalance to institutional power: even the most junior lawyer is encouraged to speak up and question their superiors if they feel something is not right. We need a similar mechanism for AI developers, to instil shared values dedicated to safety, security and trustworthiness.

How do we decide who should be subject to regulation, given that AI developers come from very different professional and academic backgrounds? We can learn from the UK’s financial services industry, which comprises individuals with a wide range of job titles and backgrounds, and where roles are continually evolving. There, individuals are regulated according to their function rather than their job title or academic qualifications. We would therefore define an ‘AI professional’ by function rather than terminology: ‘an individual who develops AI systems in the course of business’. Developing AI systems ‘in the course of business’ means being paid to do so, so an individual working for a not-for-profit AI company would still qualify provided they receive a salary.

Professionalising AI is not a panacea. It is one piece of the puzzle, to be fitted into place alongside other regulatory mechanisms in a wider ecosystem. This approach of overlapping schemes is common elsewhere: in the financial services industry, some individuals are regulated, but so too are their employers. If individuals or firms act improperly, they might be fined or banned from the industry, but they might also be sued in court by those they have harmed. We can and should adopt the same approach for AI, with rules for humans but also rules for the technology itself and the corporations building it.

One objection to professionalising AI development is that it would hamper growth. However, any costs would be outweighed by the benefits. Individuals who satisfy professional requirements would gain status, as well as an ethical foundation to guide them through challenging and socially important issues. Firms employing AI professionals would gain an additional source of assurance that their AI systems will function consistently with societal and regulatory expectations, helping to drive uptake of the technology. Professionalising the AI workforce would enable the UK to compete on quality, rather than engaging in a regulatory race to the bottom.

The UK is already a world leader in professional services. Services account for around 80% of our GDP, and the UK is the world’s second-largest exporter of services, after the US. One reason UK lawyers, accountants and financial services professionals are so well regarded internationally is that these are all highly regulated professions, in which a uniform minimum standard of quality and integrity can largely be guaranteed.

The UK placed itself at the forefront of AI regulation by hosting the AI Safety Summit at Bletchley Park a few weeks ago, bringing together 28 of the world’s major nations to discuss the dangers of unconstrained AI. However, we risk losing momentum and falling behind internationally if further steps are not taken. No country has yet proposed professionalising AI development. The UK should be the first.

Jacob Turner is a barrister at Fountain Court Chambers and the author of ‘Robot Rules: Regulating Artificial Intelligence’. Tristan Goodman is an AI policy professional and was previously a solicitor at a City law firm.
