Google recently responded to the Court of Justice of the European Union decision in Google Spain v González [2014] EUECJ C-131/12 (13 May) by announcing a new online form through which individuals can request the removal of links returned by searches on their names. It is a bold move, and one that will take an army of reviewers to police.

The most critical questions for Google’s lawyers and newly recruited paralegals when reviewing a deluge of new take-down/blocking requests will be:

  • when is the data processing complained of unlawful within the EU data protection regime; and
  • when does Google have the requisite knowledge to make such processing unlawful?

It is not sufficient for an individual to write to Google and ask it to block all their personal data from its search results because he or she is a private person or is embarrassed by the data. The processing of the data by Google search (not necessarily by the original publisher) must be contrary to the data protection principles established by the EU Data Protection Directive or, more accurately, the legislation that implements the directive in the relevant EU member state. For example, a UK citizen would need to demonstrate a breach of the Data Protection Act 1998 by reference to the data protection principles set out in schedule 1 of the act, read with the processing conditions in schedules 2 and 3.

Each European jurisdiction has developed its own body of case law and regulatory decisions regarding unlawful data processing. All of that case law will now need to be seen in light of the guidance provided by the CJEU in Google Spain.

But what exactly is the threshold? Many observers, particularly those steeped in First Amendment law in the US, would be forgiven for thinking there is barely any threshold at all. But a closer look at the CJEU’s reasoning is required.

The key passage in the judgment is at paragraph 92, which provides the rather unhelpful guidance that data will be unlawfully processed when it is:

  • ‘inadequate, irrelevant or excessive in relation to the purposes of the processing’;
  • ‘not kept up to date’; or
  • ‘kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes’.

It is hard to imagine a more opaque set of guiding principles. A barrage of questions emerges:

  • When is data irrelevant, and by what standard is relevance judged? Is this some kind of public interest test, or can data be relevant to only a small group of people? Presumably, as per the judgment, relevance fades over time, but how long does it take for historical records to become irrelevant? When is data ‘no longer’ relevant? So far we have one data point: a 16-year-old repossession notice is too old. But how can these vague principles be applied to other situations?
  • What does ‘excessive’ mean? That 10 Google search results containing the same personal data should be treated differently from a single result? If data was true at the time of publication (for example, that an individual has a serious illness), does it become ‘out of date’ when the facts change (for example, the illness is cured)? Does its position in the Google search rankings matter? It is notable that the CJEU highlighted the impact a search result can have on an individual’s privacy, observing that it often gives the information far more prominence than it would have if simply left on the third-party website where it is published (see below). One might infer from this that ranking position does matter, but the court has not quite said as much.
  • When is data required for statistical purposes? Google does not exist for statistical purposes. It simply indexes data and statistics stored on other websites. So would this exception be relevant to Google? How will such statistics be found if not through search engines?

Google’s ability to collate data is also key. The CJEU stated at paragraph 80: ‘It must be pointed out at the outset that… processing of personal data, such as that at issue in the main proceedings, carried out by the operator of a search engine is liable to affect significantly the fundamental rights to privacy and to the protection of personal data when the search by means of that engine is carried out on the basis of an individual’s name, since that processing enables any internet user to obtain through the list of results a structured overview of the information relating to that individual that can be found on the internet – information which potentially concerns a vast number of aspects of his private life and which, without the search engine, could not have been interconnected or could have been only with great difficulty – and thereby to establish a more or less detailed profile of him. Furthermore, the effect of the interference with those rights of the data subject is heightened on account of the important role played by the internet and search engines in modern society, which render the information contained in such a list of results ubiquitous.’

While unhelpful to Google, this passage will at least be seized upon by other website operators and platforms (and Google’s other services) to argue that the decision applies only to search engines and their unique indexing qualities and coverage across the web. But we will have to wait and see just how narrowly it will be interpreted and whether websites with search functions will also be caught by these broad principles.

A more fundamental question is whether it is right that Google and other search engines should have to act as judge and jury in answering these questions. In this respect, it is important to consider how the threshold for ‘unlawful data processing’ sits alongside the ‘safe harbour’ defences of ‘mere conduit’, ‘caching’ and ‘hosting’ provided by articles 12, 13 and 14 of the E-Commerce Directive.

In the case of Metropolitan Schools v Google [1] in the UK, Google was classified as a ‘mere conduit’ in relation to its search results, even after being notified of unlawful (libellous) content being returned, and so was not liable for it. That decision brought Google close to the protection it would be afforded in the US by virtue of section 230 of the Communications Decency Act 1996. Following Google Spain, however, regardless of that decision on Google’s status as an intermediary search engine, it can now be liable as a data controller – effectively as a primary publisher.

In other words, the law of intermediary liability has been turned on its head. What Google Spain potentially does is remove knowledge as a key requirement for imposing liability on internet intermediaries, provided that the processing (even without knowledge of the unlawful content) can be characterised as unfair within the EU data protection regime. In practice, however, individuals will have to write to Google or use the online form and join the long queue of requests. If they lose patience, they may be able to complain both to the courts and to the relevant information regulator.

Google can expect a good deal of head-scratching over the next few months as individuals and their lawyers review take-down requests based on defamation, malicious falsehood, copyright, breach of confidence and breach of privacy to see whether they can be recast as claims for unlawful data processing. And that will be happening in 28 European jurisdictions. Separating the wheat from the chaff will be no mean feat.

Ashley Hurst is a partner specialising in media and internet disputes at Olswang