Law enforcement and criminal justice authorities are increasingly using artificial intelligence (AI) and automated decision-making (ADM) systems in their work.
These systems can be used to profile people as criminals, ‘predict’ their actions, and assess their risk of future behaviour, such as committing a crime. The consequences can be devastating: people may be treated as criminals or labelled a ‘risk’ even though they have not actually committed a crime.
These criminal ‘prediction’ and ‘predictive’ policing systems are no longer confined to the realm of science fiction: they are being used by law enforcement around the world. Predictions, profiles and risk assessments based on data analysis, algorithms and AI often lead to real criminal justice outcomes, including monitoring and surveillance, repeated stop and search, questioning, fines and arrest. These systems can also heavily influence prosecution, sentencing and probation decisions.
‘Predictive’ policing & criminal ‘prediction’ systems
Law enforcement and criminal justice authorities are increasingly using big data, algorithms and artificial intelligence (AI) to profile people and ‘predict’ whether they are likely to commit a crime.
‘Predictive’ policing and criminal ‘prediction’ systems have been proven time and time again to reinforce discrimination and undermine fundamental rights, including the right to a fair trial and the presumption of innocence. This results in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe.
For example, in the Netherlands, the ‘Top 600’ list attempts to ‘predict’ which young people will commit certain crimes. One in three people on the ‘Top 600’ – many of whom have reported being followed and harassed by police – is of Moroccan descent. In Italy, a ‘predictive’ policing system called Delia includes ethnicity data to profile people and ‘predict’ their future criminality. Other systems seek to ‘predict’ where crime will be committed, repeatedly targeting areas with high populations of racialised people or more deprived communities.
Only an outright ban on these systems can stop this injustice. We have been campaigning for a prohibition in the European Union’s Artificial Intelligence Act (AI Act), as well as in other initiatives at international and national level.