The Ethics of Predictive Policing: Where Data Science Meets Civil Liberties
Algorithms: The data is fed into complex algorithms that analyse historical crime trends and identify areas with a high likelihood of future criminal activity. These algorithms fall into two main approaches: “hotspot policing” and “predictive crime modelling” [6].
- Hotspot Policing: This method identifies geographical areas with a history of high crime rates. By analysing past crime data, algorithms can pinpoint hotspots where police presence may be most effective in deterring criminal activity (a minimal counting sketch follows this list).
- Predictive Crime Modelling: This approach takes crime forecasting a step further by attempting to predict the likelihood of specific crimes occurring at particular times and locations. These models go beyond location and consider additional factors such as time of day, weather conditions, and historical data on repeat offenders.
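To make the hotspot idea concrete, here is a minimal, hypothetical sketch in Python: it bins historical incidents into a coarse geographic grid and ranks cells by incident count. The coordinates, grid size, and column names are all illustrative assumptions, not any deployed system’s design.

```python
import pandas as pd

# Hypothetical incident log: one row per historical reported crime.
incidents = pd.DataFrame({
    "lat": [41.881, 41.882, 41.900, 41.881, 41.879, 41.901],
    "lon": [-87.623, -87.624, -87.650, -87.622, -87.623, -87.651],
})

CELL = 0.005  # grid resolution in degrees (roughly 500 m); an arbitrary choice

# Bin each incident into a grid cell, then count incidents per cell.
incidents["cell"] = list(zip(
    (incidents["lat"] // CELL).astype(int),
    (incidents["lon"] // CELL).astype(int),
))
hotspots = incidents["cell"].value_counts().head(3)
print(hotspots)  # the most incident-dense cells are candidate hotspots
```

Real systems use far richer methods, such as kernel density estimation or space-time clustering, but the core logic is the same: past incident density drives future patrol allocation, which is exactly why biased inputs matter so much.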
The Algorithmic Bias Problem
Predictive policing’s potential benefits are undeniable. However, ethical concerns loom large, particularly around algorithmic bias. These biases can be embedded in the algorithms themselves or stem from the data they are trained on [7]:
- Data Bias: The algorithms rely on historical crime data, which can internalise existing human and social biases. For example, if certain communities are over-policed, their residents are more likely to be arrested, creating a skewed data set that reinforces the perception of higher crime rates in those areas. This can become a self-fulfilling prophecy, where police presence is disproportionately concentrated in these communities, perpetuating the cycle of over-policing (the toy simulation after this list makes the loop concrete).
- Algorithmic Bias: The algorithms themselves may contain biases depending on how they are designed and programmed. For example, if factors like race or socioeconomic status are included in the data analysis, the algorithms could inadvertently associate these factors with criminal activity, leading to discriminatory outcomes.
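The self-fulfilling prophecy described above can be shown with a deliberately simple simulation. Everything here is an assumption made for illustration: two areas with identical true offence rates, recorded crime that scales with patrol intensity, and an allocation rule that sends next year’s patrols wherever crime was recorded.

```python
import random

random.seed(0)
TRUE_RATE = {"A": 0.10, "B": 0.10}  # identical underlying offence rates
patrols = {"A": 100, "B": 50}       # area A starts out over-policed

for year in range(5):
    # Recorded crime scales with patrol intensity: more officers in an
    # area means more incidents are observed there, not that more occur.
    recorded = {area: sum(random.random() < TRUE_RATE[area]
                          for _ in range(patrols[area]))
                for area in patrols}
    total = sum(recorded.values()) or 1
    # "Predictive" allocation: next year's patrols follow recorded counts,
    # so the initial imbalance feeds on itself.
    patrols = {area: max(10, round(150 * recorded[area] / total))
               for area in recorded}
    print(year, recorded, patrols)
```

Despite identical true rates, area A keeps recording more crime simply because it is watched more, and the allocation rule rewards that with still more patrols.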
The consequences of algorithmic bias in predictive policing can be serious. Minority communities may be subjected to increased surveillance and police presence even when their actual crime rates are no higher, and biased predictions can divert police resources away from areas with genuine crime problems.
This isn’t just speculative.
A 2016 ProPublica investigation found that COMPAS, a widely used recidivism risk assessment algorithm, disproportionately flagged Black defendants as likely future criminals when they did not go on to reoffend [8]. This highlights the very real dangers of perpetuating racial biases through algorithmic decision-making.
Privacy and Civil Liberties Concerns
The benefits of proactive crime prevention through predictive policing come at a cost: the potential erosion of privacy and civil liberties. Predictive policing relies on the collection and analysis of vast amounts of personal data, and concerns about intrusive surveillance and the potential misuse of this information should not be ignored. Individuals may feel a constant sense of being monitored, chilling free movement and expression.
The focus on pre-crime prediction could also lead to a shift away from traditional law enforcement practices that rely on concrete evidence and due process. People could end up being flagged for potential ‘criminal’ activity based solely on decisions made by algorithms. This could lead to increased stops, frisks, and arrests without probable cause, again disproportionately impacting marginalized communities and undermining fundamental rights.
The social and ethical impacts of predictive policing extend beyond individual privacy.
Knowing that they might be flagged by algorithms, people may be less likely to engage in certain activities, even perfectly lawful ones. Peaceful protests or gatherings in high-crime areas identified by algorithms could be viewed with increased scrutiny, stifling free assembly.
Finding the right balance between public safety and individual liberties is crucial: while predictive policing could help reduce crime, its implementation must respect society’s fundamental rights and freedoms.
Seeking Solutions: Mitigating Bias and Ensuring Fairness
A critical first step towards responsible use is ensuring the data used to train predictive policing algorithms is comprehensive and representative of the population. Skewed data sets perpetuate existing biases.
For instance, including data from social services alongside crime statistics could provide a more holistic picture of a community, helping to identify underlying social issues that contribute to crime rates.
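As a hypothetical illustration of that more holistic picture, the sketch below joins per-area crime counts with social-service indicators. The area names, columns, and figures are invented for the example.

```python
import pandas as pd

# Hypothetical per-area tables: crime counts and social-service indicators.
crime = pd.DataFrame({"area": ["A", "B"], "incidents": [120, 45]})
social = pd.DataFrame({
    "area": ["A", "B"],
    "unemployment_rate": [0.14, 0.05],
    "housing_requests": [300, 80],
})

# Joining the sources gives analysts context beyond raw incident counts,
# e.g. flagging areas where social need, not criminality, is the driver.
merged = crime.merge(social, on="area")
print(merged)
```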
Regularly auditing algorithms for bias is also crucial [9]. This involves analysing the algorithms’ decision-making processes to identify and address any discriminatory outcomes. Independent oversight bodies could be established to conduct these audits, fostering transparency and accountability within law enforcement agencies.
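One common audit check, sketched below on assumed data, is comparing error rates across demographic groups: if the model’s false positive rate is much higher for one group, that is a discriminatory outcome worth investigating. The column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical audit log: one row per person scored by the model.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged":    [1,   0,   1,   0,   1,   1,   0,   1],  # model prediction
    "reoffended": [0,   0,   1,   0,   0,   0,   0,   1],  # observed outcome
})

# False positive rate per group: flagged despite no later offence.
fpr = (audit[audit["reoffended"] == 0]
       .groupby("group")["flagged"]
       .mean())
print(fpr)  # a large gap between groups signals disparate impact
```

This is only one fairness metric; a real audit would examine several (false negative rates, calibration, and so on), since they can conflict with one another.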
Promoting transparency and public engagement will be crucial for fostering trust and legitimacy in the use of predictive policing technologies [7].
Law enforcement agencies should openly disclose how predictive policing algorithms are used and what data is collected. Clear and accessible communication with the public is essential for building trust and addressing concerns.
Additionally, exploring alternative crime prevention strategies centred on community policing initiatives could complement these efforts. Focusing on the root causes of crime, such as poverty and lack of opportunity, can lead to more sustainable solutions that don’t rely solely on data-driven predictions.
This would help re-humanise policing efforts and strengthen the legitimacy of the data-driven strategies that remain in use.