A study conducted by researchers at New York University revealed that biased and corrupt policing practices can infect predictive policing software by feeding “dirty data” into its algorithms. The problem is further exacerbated by the lack of transparency and public oversight over these systems.
Here is an excerpt from Jade McClain’s report in Futurity on bias in predictive policing:
Researchers identified 13 jurisdictions with documented instances of unlawful or biased police practices that also explored or deployed predictive policing systems during those periods of unlawful activity.
The Chicago Police Department, for example, was under federal investigation for unlawful police practices when it implemented a computerized system that identifies people at risk of becoming a victim or offender in a shooting or homicide.
The study showed that the demographic of residents whom the Department of Justice identified as targets of Chicago’s policing bias overlapped with the demographic the predictive system flagged. Other examples showed significant risks of similar overlap, but because government use of predictive policing systems is often secret and hidden from public oversight, the full extent of those risks remains unknown, according to the study.
“In jurisdictions that have well-established histories of corrupt police practices, there is a substantial risk that data generated from such practices could corrupt predictive computational systems. In such circumstances, robust public oversight and accountability are essential,” Schultz says.
“Even though this study was limited to jurisdictions with well-established histories of police misconduct and discriminatory police practices, we know that these concerns about policing practices and policies are not limited to these jurisdictions, so greater scrutiny regarding the data used in predictive policing technologies is necessary globally,” says lead author Rashida Richardson.
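To make the “dirty data” feedback loop concrete, here is a minimal Python sketch. It is not the researchers’ model or any real department’s system; the district names, rates, and allocation rule are all hypothetical. It shows how a predictor trained on recorded incidents, rather than actual crime, simply reproduces the patrol bias that generated those records.

```python
import random

random.seed(0)

# Hypothetical setup: two districts with the SAME underlying crime rate,
# but historically biased patrolling over-polices district A, so more
# incidents get *recorded* there. Those records are the "dirty data".
TRUE_CRIME_RATE = 0.10                 # identical in both districts
PATROL_SHARE = {"A": 0.8, "B": 0.2}    # biased historical patrol allocation

def simulate_recorded_incidents(n_days=1000):
    """Recorded incidents reflect patrol presence, not just crime."""
    records = {"A": 0, "B": 0}
    for _ in range(n_days):
        for district in records:
            crime_occurred = random.random() < TRUE_CRIME_RATE
            officer_present = random.random() < PATROL_SHARE[district]
            if crime_occurred and officer_present:
                records[district] += 1
    return records

# A naive "predictive" system: allocate tomorrow's patrols in proportion
# to historical recorded incidents -- it inherits the patrol bias intact.
history = simulate_recorded_incidents()
total = sum(history.values())
predicted_allocation = {d: round(c / total, 2) for d, c in history.items()}

print("Recorded incidents:", history)
print("Model's patrol allocation:", predicted_allocation)
# Although the true crime rates are equal, the model sends roughly 80% of
# patrols back to district A, generating yet more records there and
# reinforcing the original bias on each retraining: a feedback loop.
```

The point of the sketch is that the training data conflates two things, where a crime occurred and where police were looking, which is why, without outside oversight, a system trained on such records can ratify past bias while appearing objective.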