Artificial intelligence is often described as the fourth industrial revolution. Human workers have been displaced by machines before, but AI’s learning and decision-making capabilities present new challenges for the workplace. AI introduces potential legal implications for the employment relationship, and employment law may have to adapt to cover the complexities of adding intelligent systems to the human resources equation.

Leah Caprani

Bias-free algorithms

Employing the right people is a key driver of business success, and this depends to a large extent on an effective recruitment process.

AI can streamline recruitment by swiftly sifting through candidates’ CVs and identifying those with the appropriate skills and experience. Removing human bias should also promote diversity in the workplace. For example, gamification software can be used to identify candidates whose capabilities and qualities present the best fit. This works by applying bias-free algorithms that derive their selection criteria from the company’s existing employees, as the sketch below illustrates.
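By way of illustration only (this describes no particular vendor’s product, and all names and data below are hypothetical), such a sift can be reduced to a short sketch: derive the selection criteria from whatever skills recur among existing employees, then score each incoming CV against them.

```python
from collections import Counter

def derive_criteria(existing_employees, min_share=0.5):
    """Derive screening criteria: the skills held by at least
    min_share of the company's current workforce."""
    counts = Counter(
        skill for emp in existing_employees for skill in emp["skills"]
    )
    threshold = min_share * len(existing_employees)
    return {skill for skill, n in counts.items() if n >= threshold}

def score_cv(cv_skills, criteria):
    """Score a CV as the fraction of derived criteria it matches."""
    return len(criteria & set(cv_skills)) / len(criteria)

# Hypothetical data: because the criteria come from incumbents, any
# skew in the existing workforce flows straight into the screen.
employees = [
    {"skills": {"python", "sql", "negotiation"}},
    {"skills": {"python", "sql"}},
    {"skills": {"python", "excel"}},
]
criteria = derive_criteria(employees)                 # {'python', 'sql'}
print(score_cv({"python", "sql", "law"}, criteria))   # 1.0
print(score_cv({"excel", "law"}, criteria))           # 0.0
```

The design choice is the point: because the criteria are derived from incumbents, the screen inherits whatever profile, and whatever skew, the existing workforce already has.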

But AI recruitment is not without problems. Amazon was forced to abandon its AI recruitment software because it was discriminating against female candidates, rejecting CVs that featured the word ‘women’s’. Because most of Amazon’s early employees had been men, the software had learned to perpetuate a human bias implicit in the company’s historic data.

Theoretically, an AI system could apply an algorithm that goes to the other extreme and favours candidates on the basis of protected characteristics such as sex, race or religion or belief. Positive discrimination is unlawful in the UK except in certain limited circumstances: an employer may become liable unless it can show that the persons who share a protected characteristic are disadvantaged or under-represented, and that the positive action taken is proportionate. Employers implementing AI programmes therefore need to monitor both the system’s inputs and its outputs. They should learn to treat the AI recruitment engine as an employee: train it, monitor it, appraise it and intervene when necessary.
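What ‘appraising’ the engine might look like in practice can also be sketched. The check below applies the ‘four-fifths rule’ drawn from US selection-procedure guidance (used here as an illustrative yardstick, not a UK legal standard): the selection rate for any group should be at least 80% of the rate for the most-favoured group. The figures are hypothetical.

```python
def selection_rates(outcomes):
    """outcomes maps each group to (selected, applied); returns the
    selection rate per group."""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the best-performing group's rate (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical output of an AI sift over one recruitment round
outcomes = {"men": (40, 100), "women": (22, 100)}
print(adverse_impact(outcomes))  # {'women': 0.55} -> intervene and retrain
```

A check like this says nothing about why a disparity arose; it simply tells the employer when to step in, which is the ‘appraise and intervene’ step described above.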

Part of the team 

Of course, employers will not have to comply with statutory rights, such as the National Minimum Wage or the Working Time Regulations, in respect of robot ‘employees’. But employment policies and legislation may need to be amended to include tougher safeguards to protect human employees from the demands of a 24-hour workplace.

Although AI machines do not get tired or ill, they are not infallible. Robots – however intelligent – are not recognised as legal persons, so who is to blame when they go wrong? At present there can be disputes about whether an employer is vicariously liable for the acts of its employees in ‘the course of employment’, but will this extend to the conduct of intelligent machines that are capable of some level of independent decision-making?

If this sounds fanciful, remember that the concept of vicarious liability in relation to employees only evolved in the late 19th century, in response to the social phenomenon of mass employment. So it could change again in light of the technological revolution. Currently, employers are likely to be liable for their employees incorrectly inputting data into an AI system, but what happens if AI uses the right data to make the wrong decision? Or where it is given the capability of making independent decisions? Would it then be required to have its own legal personality in order to give third parties a remedy?

The law will eventually need to deal with these questions. 

In the meantime, AI will pose new questions for employers in evaluating their employees. For example, if AI makes a mistake, was it a ‘learned’ mistake which is not the fault of any of the humans involved, or was it traceable to an error (or even a deliberate act) by a human? 

Human element

Dismissing robot ‘employees’ is easy – they do not have feelings and they cannot object. But what happens if a robot is the reason for another employee’s dismissal, for example where the employee refuses to comply with the machine’s instructions, or is dismissed as a result of data produced by an automated monitoring system?

The General Data Protection Regulation has introduced new legislative safeguards in respect of profiling and automated decision-making, protecting individuals from decisions based solely on automated processing that have legal or similarly significant effects on them. While automated processing of this sort is lawful in limited circumstances (for example, when necessary to enter into or perform a contract), it must be specifically detailed in the employer’s privacy notice, and employees must be able to request human intervention or to challenge the decision.
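As a purely illustrative sketch (the record structure and field names are hypothetical, and nothing here is legal advice), an HR system could log each solely automated decision in a form that supports the employee’s right to request human intervention:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecision:
    """A record of a solely automated decision with significant
    effects, retained so the employee can invoke the safeguards
    described above."""
    employee_id: str
    outcome: str
    lawful_basis: str                  # e.g. "necessary for the contract"
    made_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    human_review_requested: bool = False
    reviewer: Optional[str] = None

    def request_human_review(self) -> None:
        """Invoked by the employee; routes the case to a person."""
        self.human_review_requested = True

    def record_human_review(self, reviewer: str, new_outcome: str) -> None:
        """The human decision supersedes the machine's."""
        self.reviewer = reviewer
        self.outcome = new_outcome

decision = AutomatedDecision("emp-042", "flagged by monitoring", "contract")
decision.request_human_review()
decision.record_human_review("HR manager", "no further action")
```

The essential design point is that the machine’s outcome is provisional: a route to a human reviewer always exists, and the human decision supersedes it.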

Hard data alone can be misleading and, given that an employee’s conduct may be attributable to a number of external factors, the human element will – and arguably should – remain part of decision-making processes about employees for some time.

AI can improve efficiency in the workplace, but it cannot work autonomously. Companies should work alongside it rather than relying on it, and put adequate safeguards and policies in place. AI is software: it learns, but someone still needs to program and monitor it. Employment law will need to evolve to plug the gaps as technology advances. Ultimately, you cannot take the human out of human resources.

Leah Caprani is an employment paralegal at Winckworth Sherwood and an executive committee member of the Junior Lawyers Division
