The attraction of predictive policing is that it incorporates a quantitative element into the field of law
enforcement. Doing so supposedly introduces mathematical rigor and a scientific basis into decisions
about further detention, investigation, sentencing, or parole.

But recent studies by researchers at Dartmouth College and other institutions have cast doubt on the
efficacy of predictive policing. The Dartmouth study, in particular, found that the leading proprietary
predictive policing software performed no better than untrained humans.

One issue that particularly rankles is false flagging, which disadvantages black defendants more than
white defendants.

Eric Siegel, founder of PredictiveAnalyticsWorld, writes:

Predictive models incorrectly flag black defendants who will not re-offend more often than they do for
white defendants. In what is the most widely cited piece on bias in predictive policing, ProPublica reports
that the nationally used COMPAS model (Correctional Offender Management Profiling for Alternative
Sanctions) falsely flags white defendants at a rate of 23.5%, and black defendants at 44.9%. In other
words, black defendants who don’t deserve it are erroneously flagged almost twice as much as
undeserving whites. To address this sort of disparity, researchers at Google propose an affirmative
action-like policy whereby a disenfranchised group is held to a more lenient standard (their interactive
demo depicts the case of flagging for loan defaults rather than future crime, but the same concept
applies).

In opposition, advocates of COMPAS counter that each flag is equally justified for both races.
Responding to ProPublica, the creators of COMPAS point out that, among those flagged as higher risk,
the portion falsely flagged is similar for black and white defendants: 37% and 41%, respectively. In other
words, among defendants who are flagged, the flag proves erroneous about equally often for white and
black defendants. Other data scientists agree that this meets the standard needed to exonerate the
model as unbiased.
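
The disagreement turns on which error rate is being compared. The short Python sketch below, using
invented confusion-matrix counts that merely mirror the pattern of the COMPAS debate rather than the
actual data, shows how both sets of numbers can be true at once: one group's non-re-offenders are
flagged far more often (unequal false positive rates), yet the share of erroneous flags among those who
are flagged is roughly the same for both groups (similar false discovery rates).

# Hypothetical confusion-matrix counts, invented to mirror the pattern of the debate -- not the real COMPAS data.

def rates(tp, fp, tn, fn):
    """Two different error rates computed from the same confusion matrix."""
    fpr = fp / (fp + tn)  # false positive rate: share of non-re-offenders who get flagged
    fdr = fp / (fp + tp)  # false discovery rate: share of flagged defendants who do not re-offend
    return fpr, fdr

# Group A: higher base rate of recorded re-offense.
fpr_a, fdr_a = rates(tp=500, fp=300, tn=400, fn=150)
# Group B: lower base rate.
fpr_b, fdr_b = rates(tp=200, fp=120, tn=600, fn=100)

print(f"group A: false positive rate {fpr_a:.1%}, false discovery rate {fdr_a:.1%}")
print(f"group B: false positive rate {fpr_b:.1%}, false discovery rate {fdr_b:.1%}")
# group A: false positive rate 42.9%, false discovery rate 37.5%
# group B: false positive rate 16.7%, false discovery rate 37.5%

Because the two groups have different base rates of recorded re-offense, well-known impossibility
results imply that equal false positive rates and equal false discovery rates generally cannot both hold
at once, which is why each side can cite figures that support its reading.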

Siegel believes that the proprietary algorithms used in predictive policing software must be made
transparent, more cognizant of biased ground truth, and rooted in societal context. “Crime-predicting
models themselves must remain color blind by design, but the manner in which we contextualize and
apply them cannot remain so.”

“Reintroducing race in this way is the only means to progress from merely screening predictive models
for racial bias to intentionally designing predictive policing to actively advance racial justice,” Siegel
adds.
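
As a rough illustration of what applying a model in a race-conscious way could look like in code, the
hypothetical Python sketch below uses group-specific decision thresholds, similar in spirit to the “more
lenient standard” approach Siegel attributes to Google's researchers. The simulated scores, base rates,
and the 20% target false positive rate are all invented for the example; none of them come from COMPAS
or from the Google demo.

import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate, score_shift):
    # Hypothetical group: noisy risk scores loosely correlated with outcomes. score_shift
    # mimics proxy features that push one group's scores up regardless of the true outcome.
    y = rng.random(n) < base_rate  # True = re-offends in the recorded data
    scores = np.clip(0.4 * y + score_shift + rng.normal(0.3, 0.2, n), 0, 1)
    return y, scores

def false_positive_rate(y, scores, threshold):
    flagged = scores >= threshold
    return (flagged & ~y).sum() / (~y).sum()

def threshold_for_fpr(y, scores, target_fpr):
    # Flagging above the (1 - target_fpr) quantile of non-re-offenders' scores
    # flags roughly target_fpr of them.
    return np.quantile(scores[~y], 1 - target_fpr)

y_a, s_a = simulate_group(5000, base_rate=0.45, score_shift=0.10)  # hypothetical group A
y_b, s_b = simulate_group(5000, base_rate=0.30, score_shift=0.00)  # hypothetical group B

# One color-blind threshold produces different false positive rates per group ...
for name, y, s in (("A", y_a, s_a), ("B", y_b, s_b)):
    print(f"single threshold 0.5, group {name}: FPR = {false_positive_rate(y, s, 0.5):.1%}")

# ... while per-group thresholds can be tuned to equalize them.
for name, y, s in (("A", y_a, s_a), ("B", y_b, s_b)):
    t = threshold_for_fpr(y, s, target_fpr=0.20)
    print(f"per-group threshold, group {name}: t = {t:.2f}, FPR = {false_positive_rate(y, s, t):.1%}")

Whether such explicitly race-aware thresholds should be used is exactly the policy question Siegel
raises; the sketch only shows that the mechanics of applying them are simple.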