The idea of a ‘duty of care’ is well established. Such a duty can arise in common law or in statute. It is this latter, statutory duty of care that Professor of Internet Law Lorna Woods and I envisage for regulating social media in a new project with the Carnegie UK Trust.

William Perrin

Many commentators have sought an analogy for social media services as a guide to the best route to regulation – a common comparison is that social media services are ‘like a publisher’. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as public places – like an office, a bar or a theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things and, in our view, they should be protected from harm while they are there.

The law has proven very good at this in physical settings. Workplaces, public spaces, houses and consumer products owned or supplied by companies have to be safe for the people who use them. Making companies invest in safety also makes the market work better: the company bears the full costs of its actions, rather than receiving an implicit subsidy when society bears those costs.

Duties of care are expressed in terms of what they are meant to achieve (ie the prevention of harm) rather than prescribing the steps for getting there. This means that duties of care work in circumstances where so many different things happen that you could not write rules for each one. Such generality works well in multifunctional places like houses, parks, grounds, pubs, clubs, cafes and offices, and has the added benefit of being to a large extent future-proof. Duties of care set out in law 40 years ago or more still perform well – for instance, the duty of care owed by employers to employees under the Health and Safety at Work Act 1974 remains effective, despite today’s workplaces being profoundly different from those of 1974.

In our view the generality and simplicity of a duty of care suit the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking an approach similar to that used for corporately owned public spaces, workplaces and products in the physical world, harm can be reduced on social networks. Making owners and operators of the largest social media services responsible for the costs and actions of harm reduction will also make markets work better.

When parliament sets out a duty of care it often sets down in the law a series of prominent harms or areas that cause harm. This approach has the benefit of guiding companies on where to focus and ensures that parliament’s priorities are not lost. We propose setting out the key harms that qualifying companies have to consider under the duty of care, based in part on the government’s Internet Safety Green Paper.

We list below some areas of harm, many of which are already criminal offences – the duty of care aims to prevent an offence from happening and so requires social media service providers to take action before activity reaches the level at which it would become an offence.

  • Harmful threats – statements of an intention to cause pain, injury, damage or other hostile action such as intimidation; psychological harassment; threats of a sexual nature; threats to kill; racial or religious threats, known as hate crime; hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend hate crime to include misogyny.
  • Economic harm – financial misconduct, intellectual property abuse.
  • Harms to national security – violent extremism, terrorism, state-sponsored cyber warfare.
  • Emotional harm – emotional harm suffered by users, to be prevented before it builds up to the criminal threshold of a recognised psychiatric injury.
  • Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse.
  • Harms to justice and democracy – intimidation of people taking part in the political process that goes beyond robust debate, and harms to the criminal and trial process.

We would also require qualifying social media service providers to ensure that their service was designed in such a way as to be safe to use, including at the level of system design. This acts as a hedge against unforeseen developments as well as aggregating the prevention of the harms above. We have borrowed the concept from risk-based regulation in the General Data Protection Regulation and the Health and Safety at Work Act, which both, in different ways, require activity to be safe or low-risk by design.

People would have the right to sue qualifying social media service providers under the duty of care. But, given the huge power of most social media companies relative to an individual, we would also appoint a regulator to ensure that companies have measurable, transparent and effective processes in place to reduce harm, helping to avoid the need for individuals to take action in the first place. The regulator would have powers of sanction if regulated companies did not comply.

William Perrin is a trustee of the Carnegie UK Trust; Lorna Woods is Professor of Internet Law at the University of Essex.