Here’s an experiment. Ask Google's online translation site to translate the phrase ‘She is a lawyer’ into Finnish. Take the result and translate it back into English (or into French, German or Japanese). It will be rendered as ‘He is a lawyer’. (The trick works in reverse with ‘He is a teacher’.)
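For anyone who wants to automate the experiment, here is a rough sketch using the unofficial deep_translator package. The choice of library and the exact output are my assumptions rather than anything Google documents, and the result may change as the underlying service changes.

```python
# A rough sketch of the round-trip experiment using the unofficial
# deep_translator package (pip install deep-translator). The package choice
# and exact output are assumptions; results may vary over time.
from deep_translator import GoogleTranslator

# English -> Finnish: Finnish has only the gender-neutral pronoun 'hän'
finnish = GoogleTranslator(source="en", target="fi").translate("She is a lawyer")

# Finnish -> English: the gender has to be guessed from usage statistics
round_trip = GoogleTranslator(source="fi", target="en").translate(finnish)

print(finnish)     # e.g. 'Hän on asianajaja'
print(round_trip)  # typically comes back as 'He is a lawyer'
```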

Don't blame Google's algorithm for the everyday sexism. Finnish, like Turkish, makes no distinction between 'he is' and 'she is', so when translating the phrase back into English the software looks for the pronoun most commonly paired with 'lawyer' across the web. For obvious reasons it will quickly identify a cluster around the pairing of 'he' and 'lawyer', and present this as the correct outcome.
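To see why the 'most common pairing wins' approach amplifies the skew rather than merely reproducing it, consider a toy sketch - mine, not Shah's - with invented co-occurrence counts:

```python
# A toy sketch (not from the article) of the 'most common pairing wins'
# heuristic. The co-occurrence counts are invented for illustration.
from collections import Counter

# Hypothetical pronoun/noun co-occurrence counts harvested from web text
corpus_counts = {
    "lawyer":  Counter({"he": 7200, "she": 2800}),
    "teacher": Counter({"she": 6100, "he": 3900}),
}

def render_pronoun(noun: str) -> str:
    """Pick the pronoun most often paired with the noun in the corpus."""
    return corpus_counts[noun].most_common(1)[0][0]

print(render_pronoun("lawyer"))   # 'he'  - every time, despite a 72/28 split
print(render_pronoun("teacher"))  # 'she'
```

A 72/28 imbalance in the data becomes a 100/0 imbalance in the output - which is what amplification means here.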

I learned this trick from Hetan Shah, executive director of the Royal Statistical Society, who gave a similar example to the House of Commons Science and Technology Select Committee this week. It's a perfect illustration of how computer software can replicate and even amplify human bias - not because bias is deliberately written into the decision-making algorithm, but because the data from which it learns to mimic human intelligence themselves reflect human bias.

This matters. Computer systems based on algorithms that learn by spotting patterns in real-world data (which is what we call 'artificial intelligence') already make important decisions about us - for example, whether to grant us a loan or pitch us a special offer. In the very near future they may be deciding whether or not we have committed a criminal offence, or at least the route through which it will be prosecuted, which can amount to the same thing.
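As a concrete illustration of the kind of system being described - and only an illustration, with invented features and figures - a lender's model might look something like this:

```python
# A minimal, invented sketch of a model that 'learns by spotting patterns'
# in past lending data and then scores new applicants. Features and figures
# are made up; if the historical decisions were biased, so is the model.
from sklearn.linear_model import LogisticRegression

# Historical applicants: [income in £k, years at current address]; 1 = repaid
X = [[22, 1], [35, 4], [48, 10], [19, 2], [55, 7], [28, 3]]
y = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a new applicant - the 'important decision about us'
applicant = [[30, 2]]
print(model.predict(applicant), model.predict_proba(applicant))
```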

Hence the interest of the select committee, which this week was taking evidence from academics and legal bodies including the Law Society on algorithms in decision-making.

Predictably, the main focus of the MPs' questions seemed to be how to step up regulation. On the face of it, this is an attractive idea. At the moment, the main legal rein is the upcoming General Data Protection Regulation, and its expression in domestic law through the Data Protection Bill currently going through the Lords. In theory, this will give individuals a right to opt out of having automated decisions made about them when they would have a 'significant effect'. But, as Dr Sandra Wachter of the Oxford Internet Institute told the committee, this will be impossible to enforce unless people have a right to know how the decision was taken. 

One solution may be to emulate New York City, which is seeking to require public agencies using decision-making algorithms to publish the source code. Such a move would be vigorously fought by players such as Google, which has already told the committee that full transparency would help 'bad actors' such as hackers and people attempting to game the system. This argument is unlikely to wash. A better objection is that publishing thousands of lines of source code is not in itself much of a step towards transparency. For most people, it would be more useful to have an understandable explanation of what decisions have been taken by algorithms and, crucially, what it would have taken for the decision to go the other way. We should also have a right to audit the 'training data' from which the algorithm learns to spot patterns.
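That 'what it would have taken' point is essentially a counterfactual explanation. A minimal sketch, with an entirely invented scoring rule standing in for the opaque model:

```python
# A minimal sketch of a counterfactual explanation: find the smallest change
# to one input that flips the decision. The scoring rule is entirely invented.
def decision(income_k: float, years_at_address: float) -> bool:
    """Stand-in for an opaque scoring model; True means the loan is approved."""
    return 0.8 * income_k + 2 * years_at_address >= 30

def what_would_it_have_taken(income_k, years_at_address, step=1, limit=200):
    """Smallest income rise (in £1k steps) that turns a refusal into approval."""
    for extra in range(0, limit, step):
        if decision(income_k + extra, years_at_address):
            return f"approved at £{income_k + extra}k (applicant is on £{income_k}k)"
    return "no approval within the searched range"

print(what_would_it_have_taken(income_k=20, years_at_address=2))
# -> approved at £33k (applicant is on £20k)
```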

So who is going to enforce such measures? One proposal doing the rounds is for a new licensing body, call it 'OffAlg', to vet and approve decision-making algorithms. This should be treated with extreme caution. If the government is serious about making the UK a world leader in artificial intelligence, it would be a bizarre step to risk choking off innovation just as we leave the stifling embrace of the EU's 'precautionary principle'.

In any case, an AI program is not a fixed product, like a new drug, to be tested and licensed for release. It is an iterative system, whose decisions are continually refined by exposure to real-world data. It is these decisions that should be probed for bias, preferably by existing watchdogs with the expertise to spot flaws in the data. Professor Louise Amoore of Durham University told the committee: ’It is extraordinarily difficult to remove bias, so we should begin from the position that it’s there - and work from there.’
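What that kind of probing might look like in practice - a sketch of my own, with an invented decision log - is simply a running comparison of outcomes across groups:

```python
# A sketch of an ongoing bias audit: compare the system's live decisions
# across groups rather than certifying its code once. The log is invented.
from collections import defaultdict

# Each record: (group the applicant belongs to, 1 = approved, 0 = refused)
decision_log = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decision_log:
    totals[group] += 1
    approvals[group] += approved

for group in totals:
    print(f"{group}: approval rate {approvals[group] / totals[group]:.0%}")
# A persistent gap between groups is a prompt to go back and inspect the
# training data - starting, as Amoore suggests, from the assumption that
# bias is there.
```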

For individuals, the outcome could be rather more serious than irritation at Google Translate's everyday sexism.