
Mass Surveillance
Why ‘Predictive’ Policing Must Be Banned
The UK Government is trying to use algorithms to predict which people are most likely to become killers, drawing on the sensitive personal data of hundreds of thousands of people. The secretive project, originally called the ‘Homicide Prediction Project’, was uncovered by Statewatch. They described how “data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse.”
It may sound like something from a sci-fi film or dystopian novel, but the “Homicide Prediction Project” is just the tip of the iceberg. Police forces across the UK are increasingly using so-called “predictive policing” technology to try to predict crime. Police claim these tools “help cut crime, allowing officers and resources to be deployed where they are most needed.” In reality, the tech is built on existing, flawed police data.
As a result, communities who have historically been most targeted by police are more likely to be identified as “at risk” of future criminal behaviour. This leads to more racist policing and more surveillance, particularly for Black and racialised communities, lower-income communities and migrant communities. These technologies infringe human rights and are weaponised against the most marginalised in our society. It is time that we ban them for good.
That is why we are calling for a ban on predictive policing technologies, to be written into any future AI Act or the current Crime and Policing Bill. We are urgently asking MPs to demand this ban from the government before these racist systems become any further embedded in policing.
The illusion of objectivity
The Government argues that algorithms remove human bias from decision-making. In reality, these tools are only as “objective” as the data they are fed. Historical crime data reflects decades of racist and discriminatory policing practices: poorer neighbourhoods targeted and labelled “crime hotspots” and “microbeats” synonymous with drugs and violence, and the language of “gang” and “gang-affiliated” used as a dog whistle for young Black men and boys. When algorithms are built on discriminatory data, they don’t neutralise bias; they amplify it.
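To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. The area names, patrol numbers, detection rate and hotspot share are invented purely for illustration and do not describe any real force’s system; the point is only to show how a model trained on skewed records can deepen the skew rather than correct it.

```python
# A deliberately simplified, hypothetical illustration of the feedback loop
# described above: the area names, patrol numbers and detection rate are
# invented and do not model any real police system.
#
# Both areas have identical underlying offending, but Area A starts with more
# recorded crime because it was patrolled more heavily in the past. Each year
# the "prediction" flags the area with the most records as the hotspot, the
# hotspot receives the bulk of patrols, and only offences that patrols
# encounter get recorded, so the historical skew is deepened, not corrected.

TRUE_OFFENCES = 100          # same underlying level of offending in both areas
DETECTION_PER_PATROL = 0.02  # share of offences recorded per patrol unit
TOTAL_PATROLS = 50
HOTSPOT_SHARE = 0.7          # the predicted hotspot receives 70% of patrols

records = {"Area A": 60.0, "Area B": 40.0}  # biased historical crime records

for year in range(1, 6):
    hotspot = max(records, key=records.get)  # the "prediction"
    for area in records:
        share = HOTSPOT_SHARE if area == hotspot else 1 - HOTSPOT_SHARE
        patrols = TOTAL_PATROLS * share
        # Identical behaviour, unequal observation: more patrols, more records.
        records[area] += TRUE_OFFENCES * DETECTION_PER_PATROL * patrols
    pct = 100 * records["Area A"] / sum(records.values())
    print(f"Year {year}: Area A now holds {pct:.1f}% of all recorded crime")
```

In this toy run, Area A’s share of recorded crime climbs from 60% towards 70%, even though both areas offend at exactly the same rate: the system ends up “confirming” its own biased starting data.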
There are two main types of “predictive policing” system: those which focus on geographic areas, seeking to “predict” where crimes may take place, and those which aim to “predict” an individual’s likelihood of committing a future crime.
In 2021, the Home Office funded 20 police forces from across the UK to roll out a geographic “predictive policing” programme called ‘Grip.’ The tech was described as “a place-based policing intervention that focuses police resources and activities on those places where crime is most concentrated.”1
However, research by Amnesty International found no conclusive evidence that the programme had any impact on crime. What’s more, there is evidence that the programme reinforced and contributed to racial profiling and racist policing.
Rather than investing in addressing the root causes of crime, such as the rising cost of living and lack of access to mental health services, the Government is wasting time and money on technologies that automate police racism and criminalise entire neighbourhoods.
Lack of transparency and accountability
So-called “predictive policing” systems are not only harmful in that they reinforce racism and discrimination; there is also a lack of transparency and accountability over their use. In practice, this means people often do not know when or how they, or their community, have been subject to “predictive policing,” but they can still be affected in many areas of their lives.
This includes being unjustly stopped and searched, handcuffed and harassed by police. Because data from these systems is often shared between public services, people can also experience harm in their dealings with schools and colleges, local authorities and the Department for Work and Pensions. This can affect people’s access to education, benefits, housing and other essential public services.
Even when individuals seek to access information on whether they have been profiled by a tool, they are often met with blanket refusals or contradictory statements. The lack of transparency means people often cannot challenge how or why they were targeted, or all the different places that their data may have been shared.
At a time when “predictive policing” technologies are being presented as a silver bullet for crime, police forces should be legally required to disclose every “predictive policing” system they use, including what it does, how it is used, what data it runs on and the decisions it influences.
It should also be a legal requirement that individuals are notified when they have been profiled by “predictive policing” systems, with clearly defined routes to challenge the profiling and every place their data has been shared. Without full transparency and enforceable accountability mechanisms, these systems risk eroding the very foundations of a democratic society.
The Pre-Crime Surveillance State
The expansion of “predictive policing” into public services represents a dangerous move towards a surveillance state. The scope of “predictive policing” is not limited to the criminal legal system: the Government is expanding algorithmic, automated and data-driven systems into healthcare, education and welfare as well.
Research conducted by Medact on the Prevent Duty in healthcare documented how health workers are required to identify and report those who they believe are “at risk” of being drawn into terrorism. This risks undermining therapeutic relationships, confidentiality and trust in medical practitioners, and extends the reach of policing and counter-terrorism into healthcare.
Those targeted by these kinds of systems are not afforded the right to be presumed innocent until proven guilty. Instead, they are profiled, risk-scored and surveilled based on where they live, who they associate with, or what flawed data says about them.
This is how a surveillance state embeds itself into the everyday. Without committing a crime, you can be branded a threat; without access to redress, you can be punished; and without transparency, you may never know it happened.
But the rise of pre-crime policing is not inevitable – it is a political choice. That is why we must take a stand and call on the government to ban “predictive policing” systems once and for all.
Beyond ‘predictive’ policing, towards community safety
The failures of “predictive” policing have been well documented – from reinforcing racist policing to undermining human rights. But rejecting these technologies does not mean giving up on public safety. On the contrary, it means shifting resources and attention to solutions that are proven to work, that respect human rights and that are based on trust, not fear. This means investing in secure housing, mental health services, youth centres and community-based support services for people experiencing hardship or distress. If safety is the goal, prevention, not prediction, should be the priority.
Ban crime-predicting police tech
‘Crime-predicting’ AI doesn’t prevent crime – it creates fear and undermines our fundamental right to be presumed innocent.
Sign the petition