Public bodies use opaque algorithmic systems to make or inform important decisions about our lives. From prison categorisation to the suspension of benefits and the investigation of intended marriages, algorithms play an increasing and secret role in decision-making.

Alexandra Sinclair

But algorithmic decision-making systems can discriminate and exacerbate existing biases. They can also produce unfair or unlawful decisions. That is why transparency about how these systems work is vital – so why the secrecy? Government departments warn that if the public are given too much information, people will game the system. So what is gaming? Gaming is when someone changes their behaviour to obtain a more favourable outcome from an algorithm without actually changing the characteristic the algorithm is trying to measure.

Government reliance on the risk of gaming to resist transparency is problematic. Not only is it often deployed as a blanket argument against disclosure, it also denies society the improvements in decision-making that transparency can bring: greater scrutiny, and with it greater reliability and accountability.

The idea of ‘gaming’ predates algorithmic decision-making. There are understandable fears that revealing the inner workings of government could lead to the manipulation and abuse of that information, and both governments and the private sector cite ‘gaming’ as a reason for non-disclosure. It is why the fraud detection methods of credit card companies are not made public, and why information about law enforcement techniques and surveillance practices is withheld on the basis that disclosure would undermine their effectiveness. The Freedom of Information Act embeds the same idea by exempting from disclosure information whose release would prejudice ‘crime prevention and enforcement’ or ‘immigration control’.

Government anxiety about ‘gaming’ has become a go-to reason for not disclosing information about the algorithms used in decision-making. In their joint report on algorithmic accountability in the public sector, the Ada Lovelace Institute and AI Now Institute observed that a ‘recurrent concern raised by public officials’ was that transparency would allow the system to be gamed by bad actors.

In 2020 the Independent Chief Inspector of Borders and Immigration recommended the Home Office publish as much information as possible about the visa streaming tool used to assist it in reviewing visitor visa applications to the UK. The Home Office refused, stating that it would help ‘unscrupulous parties to actively manipulate the immigration system’ and undermine the UK’s international relations.

The Greater Manchester Coalition of Disabled People asked the Department for Work and Pensions to disclose the factors used by its benefit fraud detection algorithm, known as the General Matching Service, after its members were repeatedly investigated for benefit fraud. The DWP refused to disclose any of the factors the algorithm uses.

After discovering that the Home Office uses a triage model to determine which marriages between a UK national and a non-UK national should be investigated as potential shams, the Public Law Project lodged a freedom of information request asking for the risk factors used. The Home Office disclosed some of the matters that feed into the system but refused to disclose the full set of criteria, arguing that this would prejudice immigration control and create a ‘gaming risk’.

But what does the academic literature say about the extent to which algorithms can be gamed if information is disclosed?

Not all features of an algorithm can be gamed; fixed and immutable characteristics such as height, weight, gender or ethnicity are not gameable because they cannot be altered. Algorithms which use factors not based on user behaviour ‘offer no mechanism for gaming from individuals who have no direct control over those attributes’. For example, if nationality had been used as a factor in the visa streaming tool, as alleged in the legal challenge that ultimately led to its suspension, disclosing that factor would not allow for gaming because a person cannot change their nationality. But transparency over the feature would have allowed for a conversation about whether nationality was a relevant criterion to include in the tool.

Another key theme in the literature is that algorithms are most manipulable when they rely on factors that are solely self-reported. Where the decision-maker deploying the algorithm independently verifies the criteria, manipulation should be less of a concern. For example, an algorithm that reviews job applications might use grades or qualifications as a factor. An applicant is unlikely to attempt to ‘game’ the algorithm by submitting fake grades or a forged degree, even if told that good grades are among the features the algorithm uses, because those factors are likely to be independently verified.

Algorithms are also difficult to ‘game’ if only their features are disclosed, and not the weightings or the way those features operate in combination. This means decision-makers can ‘disclose significant information about what features are used and how they are combined without creating a significant gaming threat’. It is why banks and credit bureaux disclose the factors used to calculate credit scores and loan approvals without disclosing the exact weights.
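To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python of the kind of scoring model the literature describes: the feature names are treated as public, the weights are not, and only the unverified, self-reported input offers any realistic route to gaming. It does not represent any real government or commercial system.

```python
# Hypothetical illustration only - not any real scoring system.
# The feature NAMES below are treated as public; the WEIGHTS are not.

DISCLOSED_FEATURES = [
    "age",                  # immutable: cannot be gamed even if disclosed
    "verified_income",      # checked against payroll records: hard to game
    "self_reported_hours",  # self-reported and unverified: the only gameable input
]

# Internal weights, kept confidential by the decision-maker.
_WEIGHTS = {
    "age": 0.2,
    "verified_income": 0.5,
    "self_reported_hours": 0.3,
}

def risk_score(applicant: dict) -> float:
    """Weighted sum over the disclosed features using the hidden weights."""
    return sum(_WEIGHTS[f] * applicant[f] for f in DISCLOSED_FEATURES)

applicant = {"age": 34, "verified_income": 3.2, "self_reported_hours": 1.0}
print(f"score: {risk_score(applicant):.2f}")

# Knowing only DISCLOSED_FEATURES, an applicant cannot tell how much (or even
# whether) inflating "self_reported_hours" shifts the score, and cannot alter
# "age" or "verified_income" at all - which is the literature's point that
# disclosing features without weights creates little gaming risk.
```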

What these examples show is that whether an algorithm can actually be gamed is a highly fact-dependent, contextual question. Governments should not be able to invoke the ‘risk of gaming’ in a blanket manner whenever transparency is requested. They need to think hard about the extent to which the algorithm can actually be gamed and whether there are features that can safely be disclosed. Fundamentally, if an algorithm is very easy to manipulate, it is probably not very good at assessing the qualities it is supposed to assess in the first place.

Alexandra Sinclair is a research fellow at the Public Law Project