Predictive policing can also reflect aspects of the “black box” problem, as the algorithms lack
transparency, including about what data are analysed and how the predictions are used.45
32. Location-based predictive policing algorithms draw on links between places, events
and historical crime data to predict when and where future crimes are likely to occur.46 Police
forces then plan their patrols accordingly. When officers in overpoliced neighbourhoods
record new offences, a feedback loop is created, whereby the algorithm generates
increasingly biased predictions targeting these neighbourhoods. In short, bias from the past
leads to bias in the future. In the United Kingdom of Great Britain and Northern Ireland, a
Government-commissioned study of algorithmic bias in policing showed that identifying
geographical locations as “hotspots” for crime could prime officers to expect more crime in
those areas. As a result, the officers were more likely to stop or arrest people in “hotspots”
on the basis of bias than on the basis of genuine public safety imperatives.47 In Uruguay,
researchers have found that data used in location-based predictive policing algorithms could
be biased. The location variable could function as a proxy for socioeconomic or ethnic
background, triggering discrimination.48
33. Person-based predictive policing tools predict who might commit
a future crime on the basis of background data about individuals. Background data can
include a person’s age, gender, marital status, history of substance abuse and criminal record.
As with location-based tools, past arrest data, which are often tainted by systemic racism in
the criminal justice system, can skew the future predictions of those algorithms. The use of
variables such as socioeconomic background, education level and location can act as proxies
for race and perpetuate historical biases.49 In Australia, the New South Wales Police Force
used the algorithm-based Suspect Target Management Plan to identify individuals at risk of
committing criminal offences. Its use reportedly led to a disproportionately high number of
police interactions with members of Aboriginal and Torres Strait Islander communities
before it was discontinued.50
(c) Recidivism assessment algorithms
34. Recidivism assessment tools are used to inform decisions at different stages of the
criminal justice system, including about bail, bond, sentencing and parole.51 Recidivism
assessment tools use historical data to assess defendants’ likelihood of acting in certain ways,
in particular whether they are likely to commit a new crime in the future. The tools produce
risk scores, using information from sources such as criminal records and defendant surveys.52
35. Recidivism prediction tools exhibit multiple artificial intelligence challenges that
contribute to racial discrimination. First, the tools have data challenges. The criminal justice
system data used to train their algorithms reflect systemic inequities based on a history of
racist policing behaviour.53 In addition, design choices, such as how variables are measured
or assessed, can contribute to algorithmic discrimination.54 Moreover, the way in which an
45 Lau, “Predictive policing explained”.
46 Will Douglas Heaven, “Predictive policing algorithms are racist. They need to be dismantled”, MIT Technology Review, 17 July 2020.
47 Ibid. See also Government of the United Kingdom of Great Britain and Northern Ireland, “Report commissioned by CDEI calls for measures to address bias in police use of data analytics”, 16 September 2019.
48 Juan Ortiz Freuler and Carlos Iglesias, “Algorithms and artificial intelligence in Latin America: a study of implementation by governments in Argentina and Uruguay”, World Wide Web Foundation, September 2018; and Eticas Foundation, “Uruguay’s Ministry of the Interior invests in predictive policing”, 13 September 2021.
49 Heaven, “Predictive policing algorithms are racist”.
50 Australian Human Rights Commission submission.
51 Julia Angwin and others, “Machine bias”, ProPublica, 23 May 2016.
52 Ibid.
53 See Heaven, “Predictive policing algorithms are racist”; and Michael Mayowa Farayola and others, “Fairness of AI in predicting the risk of recidivism: review and phase mapping of AI fairness techniques”, in Proceedings of the 18th International Conference on Availability, Reliability and Security (Association for Computing Machinery, 2023).
54 Mehrabi and others, “A survey on bias and fairness in machine learning”.