Artificial intelligence is being tested to monitor offenders on licence to ‘prevent crimes before they happen’. Under a pilot of new monitoring procedures announced by the government, offenders on licence will have to respond to remote check-in surveillance on their mobile devices. They will be required to record short videos of themselves, which AI will then use to confirm their identity, and to answer questions about their behaviour and recent activities.

Any attempts to thwart the AI ID matching, or concerning answers, will result in ‘an instant red alert…sent to the probation service for immediate intervention, helping prevent crimes before they happen’, the Ministry of Justice said.

The system is being trialled in four probation regions: the south west, the north west, the east of England, and Kent, Surrey and Sussex. It will then be considered for wider rollout with additional tech add-ons such as GPS location verification.

Lord Timpson, minister for prisons, probation and reducing reoffending, said: ‘This new pilot keeps the watchful eye of our probation officers on these offenders wherever they are, helping catapult our analogue justice system into a new digital age. It’s bold ideas like this that are helping us tackle the challenges we face.’

Civil liberties group Big Brother Watch warned of privacy and security risks, arguing that ‘systems designed to pre-empt harm, disorder, or crime, which often operate under the assumption that it is necessary to acquire as much data as possible…are prone to expansion’.


In a report, The dangers of digital ID and why privacy must be protected, the group said ‘function creep’ would be a risk with digital IDs, while the data collated would be ‘valuable to hackers’, citing the cyber-attack on the Legal Aid Agency earlier this year.

It added: ‘Although they might be intended for narrow and specific purposes, digital identification systems are prone to expansion. This is particularly true of systems designed to pre-empt harm, disorder, or crime, which often operate under the assumption that it is necessary to acquire as much data as possible. There are serious concerns about the security of digital ID systems. Centralised identity systems that are capable of uniquely identifying an individual and troves of deeply personal data are particularly valuable to hackers.’