The draft Online Safety Bill has finally been published, delivering on the government’s manifesto commitment to make the UK ‘the safest place in the world to be online’. The bill has its genesis in the Online Harms White Paper, published over two years ago in response to widespread concern at the malign underbelly of the internet. But following passionate lobbying by stakeholders, is the result a bill which has tried so hard to please all interested parties it has actually ended up satisfying no one?
Elusive duty of care
The cornerstone of the bill is a new ‘duty of care’ placed on service providers to protect individuals from ‘harm’. It will apply to providers both based in the UK and – nebulously – those having ‘links’ here. In the government’s sights is the gamut of illegal and legal online content, from child sexual exploitation material and terrorist activity to cyber-bullying and trolling.
The ‘duty of care’ will apply to search engines and providers of internet services that allow individuals to upload and share user-generated content. In practical terms, this net will catch social media giants such as Facebook as well as less high-profile platforms such as public discussion forums.
As regards illegal content, the duty will force all in-scope companies to take proportionate steps to reduce and manage the risk of harm to individuals using their platforms. High-risk ‘category 1’ providers – the big tech titans with large user bases, or those which offer wide content-sharing functionality – will have the additional burden of tackling content that, though lawful, is deemed harmful, such as the encouragement of self-harm and misinformation.
Adding a further level of complexity, the regulatory framework will apply to both public communication channels and services where users expect a greater degree of privacy, for example online instant messaging services and closed social media groups.
Quite how service providers will be expected to meet these onerous new obligations is not specified in the bill. They must wait for full codes of practice to be issued.
Rabbits from the hat
Sensitive to public pressure, the government has built on early iterations of its proposals to include new measures addressing concerns raised during the consultation process over freedom of expression, democratic debate, and online scams.
The initial release of the Online Harms White Paper triggered a furore over the potential threat to freedom of speech, with campaigners fearing the proposals would have a chilling effect on public discourse as service providers self-censored rather than face swingeing regulatory penalties for breaches in relation to ill-defined harms. In response to such concerns, service providers will be expected to have regard to the importance of protecting users’ rights to freedom of expression when deciding on and implementing their safety policies and procedures.
Concern has been building for some time about the influence which the largest social media companies potentially wield over political debate and the electoral process. This was seen most starkly in the US during the presidential election, where some platforms found themselves political footballs in their own right. While there are only distant echoes of that here, the role which social media plays in UK democratic events has attracted attention and, in a nod to this, the government has proposed a new duty on category 1 providers to protect ‘content of democratic importance’.
Such content is opaquely defined as ‘content that is, or appears to be, specifically intended to contribute to democratic political debate in the United Kingdom…’. Service providers affected might well be left scratching their heads about quite how they will be supposed to interpret and satisfy this obligation, and it is to be hoped that the eventual codes of practice will provide some much-needed clarity. Absent such guidance, the risk is that they will be pilloried by all sides.
Following a vocal campaign by consumer groups, industry bodies and parliamentarians, the government appears to have capitulated to pressure to include measures bringing online scams within the scope of the bill. E-commerce fraud is estimated to have risen 179% over the last decade, with romance scams alone resulting in UK losses of £60m in 2019/20; all service providers will be required to take measures against these illegal online scourges. Commentators have noted, though, that frauds committed via online advertising, cloned websites and emails will remain outside the bill’s ambit, leaving many investors still vulnerable.
A fierce watchdog?
This groundbreaking regulatory regime will be enforced by a ‘beefed-up’ Office of Communications (Ofcom) which will wield an arsenal of new powers including fines and, in the last resort, business-disruption measures. Penalties of up to £18m or 10% of annual global turnover (whichever is greater) will be at the regulator’s disposal. Those calling for senior management liability will, however, be disappointed. The bill will not impose criminal liability on named senior managers of in-scope services, though the secretary of state has reserved the power to introduce such liability in the future.
It remains to be seen how the tension between online safety on the one hand, and freedom of expression and democratic debate on the other, will play out. Service providers and Ofcom alike will no doubt have their hands full trying to decipher just how to moderate lawful but harmful online content while ensuring that users’ freedom of expression and democratic participation are not adversely affected.
Greta Barkle is an associate and Guevara Leacock a legal assistant at BCL Solicitors