Leading IT companies sidestep potential legal liabilities arising from online hate speech. What should we do about it?

The president of the American Bar Association (ABA) has just issued a statement expressing alarm at the recent rise in hate speech and violence. ‘Be assured that the ABA has committed the full force of its more than 400,000 members to fight this heinous behaviour and those who support it through word or deed.’ (Well, the ‘more than 400,000’ may be doubted, since it is clear from a trawl of stories on the ABA website that some lawyers have themselves been guilty of alarming hate speech.)

As we know, hate crime is on the rise in a number of countries, and sometimes we know why. In the UK, the Metropolitan Police commissioner reported a ‘horrible spike’ in hate crime after the Brexit vote. The Brexit vote and the Trump election appear to have released demons. Lawyers will now be advising and acting for the victims.

I want to focus here not on the criminal side, but on the legal framework for other action that can be taken, where gigantic legal problems lurk beneath the surface.

The internet has played its own magnifying role in hate speech. People find it easier to write abuse on a screen than to say it face-to-face. The European Commission this year tried to deal with this by agreeing a ‘Code of conduct on countering illegal hate speech online’ with four big players in speech platforms – Facebook, Microsoft, Twitter and YouTube.

The code of conduct’s core duty is: ‘Upon receipt of a valid removal notification, the IT companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA’ (on combating certain forms and expressions of racism and xenophobia).

You will note that the IT companies agree primarily to review reported hate material against their own terms of service – and only ‘where necessary’ against the law. In other words, private companies are turned into enforcers of the law, but subject to certain strange conditions: first, only when breaches are reported to them; second, only when they are breaches as judged primarily against their private terms and not against the applicable law; third, without any commitment to remove hate speech, but only to review it; and fourth, it is all voluntary anyway. The companies’ terms are not uniform and are self-created.

Although the code was signed only in May of this year, the commission has already published a review of how it is working. The IT companies’ actions were monitored by a limited selection of anti-hate-speech NGOs, which means that the results are partial.

They show that there were 600 reports over six weeks, of which 45% concerned Facebook. Anti-Muslim and antisemitic hate speech accounted for 43% of the reports. The great majority were reviewed within 48 hours, but the speech was removed in only 28% of cases.

Germany is particularly strong on measures to force the IT companies to act in this area, and its own government research reflects the same picture: the IT companies are taking down less than 50% of the reported illegal material (with Twitter, the figure is as low as 1%). Germany is now threatening serious fines - calculated on global annual turnover, or on-the-spot fines of up to €500,000 - if illegal posts are not removed within 24 hours.

And search engines such as Google are likely to be brought within the scope of the law.

The two gigantic legal questions lurking behind this mess are the following. First, are the IT companies publishers? If so, their liability for hosting hate speech would be clearer. They deny it. They say they are merely enablers. This is very like the Uber case now before the European Court of Justice: is Uber a taxi firm or an enabling electronic platform bringing provider and customer together (with very different regulatory consequences, depending on the answer)?

In both cases, the companies behave very much like the things they deny being - publishers or taxi firms - and by that denial escape a host of major legal liabilities. Given the way that electronic platforms are supplanting the provision of so many services, including legal services, this legally disruptive aspect of IT development needs to be settled soon.

Second, is it right that private companies are turned into enforcers of the law? It amounts to a privatisation of law enforcement, imposing on companies duties that conflict with their own powerful commercial motivations. Is this how we want our law enforced - by delegating it to conflicted private companies?

It is lawyers who will have to take action in due course on behalf of client victims to resolve these legal problems.

Jonathan Goldsmith is a consultant and former secretary-general at the Council of Bars and Law Societies of Europe, which represents around a million European lawyers through its member bars and law societies. He blogs weekly for the Gazette on European affairs