PROTECTING MINORITY RIGHTS – A Practical Guide to Developing Comprehensive Anti-Discrimination Legislation
Wider discriminatory impacts of algorithmic systems and artificial intelligence
The use of algorithmic decision-making and artificial intelligence can lead to discrimination in various
ways.1220 Two well-documented patterns are: (a) the opaque mass collection of personal data and the use of that data to train algorithmic systems in harmful ways (for example, systems used by social media platforms collect personal data and information about users and use that information to target content at them); and (b) the use of technologies in ways that lead to discriminatory results because the system “learns” from discriminatory data and reproduces that bias – an effect often referred to by data scientists as “garbage in, garbage out”.1221
The discriminatory impacts of the second pattern are evident in surveillance and policing. For example, in
a 2016 study, the Human Rights Data Analysis Group demonstrated that the use of the predictive policing
tool PredPol in Oakland, California, would reinforce racially biased police practices by recommending
increased police deployment in areas with higher populations of non-white and low-income residents.1222
Similarly, a test conducted by the American Civil Liberties Union in July 2018 found that the facial
recognition tool Rekognition incorrectly matched 28 Members of Congress, identifying them as persons
who had been arrested for a crime.1223 The false matches were disproportionately of people of colour,
including six members of the Congressional Black Caucus.
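Purely by way of illustration, the feedback dynamic underlying the predictive policing example above can be sketched in a few lines of code. The neighbourhood labels, crime rates and allocation rule below are hypothetical and are not drawn from the studies cited in this section; the sketch simply shows how a system that allocates patrols in proportion to historically recorded arrests reproduces an initial recording bias even where the underlying rates of offending are identical.

    # Hypothetical figures, for illustration only: two neighbourhoods with
    # identical underlying crime rates but a skewed historical arrest record.
    true_crime_rate = {"A": 0.10, "B": 0.10}
    recorded_arrests = {"A": 60.0, "B": 40.0}   # A has been policed more heavily

    TOTAL_PATROLS = 100
    for year in range(1, 6):
        total = sum(recorded_arrests.values())
        # "Predictive" rule: allocate patrols in proportion to past recorded arrests.
        patrols = {n: TOTAL_PATROLS * a / total for n, a in recorded_arrests.items()}
        # New arrests are recorded only where patrols are sent, so the new data
        # reflect the allocation rather than the (identical) underlying crime rates.
        for n in recorded_arrests:
            recorded_arrests[n] += patrols[n] * true_crime_rate[n]
        print(year, {n: round(p) for n, p in patrols.items()})
    # The allocation stays at roughly 60/40 every year: the system reproduces
    # the bias present in its training data ("garbage in, garbage out").

Because new arrests are recorded only where patrols are sent, each year’s “training data” mirror the previous allocation, and the disparity inherited from the historical record persists indefinitely.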
The examples provided here are only the tip of the iceberg; a full analysis of the discriminatory impact of the use of algorithms is beyond the scope of the present guide. However, the role of comprehensive
anti-discrimination law in addressing these harms is key. It is critical that both private and public actors
are bound by legal obligations requiring them to ensure that the use of algorithmic systems does not
discriminate, directly or indirectly, and that such systems are not used to exacerbate other forms of
prohibited conduct, including harassment and hate speech.
It is also vital that an equal rights approach is adopted in the design and development of such technologies.
Specifically, carrying out an equality impact assessment must be a basic requirement for the design, roll-out
and monitoring of all algorithmic systems. Such an assessment must be substantive and meaningful,
incorporating consideration of the actual or potential discriminatory effects of using algorithmic systems
through consultation with groups that are at risk of experiencing such effects. The essential need for a
“mandatory approach” to equality impact assessment was emphasized by the Special Rapporteur on
contemporary forms of racism, racial discrimination, xenophobia and related intolerance in the report on
racial discrimination and emerging digital technologies submitted by the mandate holder to the Human
Rights Council in 2020.1224
1220 See, among others, Frederik Zuiderveen Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Strasbourg, Council of Europe, 2018). Available at https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decisionmaking/1680925d73. See also Solon Barocas and Andrew D. Selbst, “Big data’s disparate impact”, California Law Review, vol. 104 (2016).
1221 Vincent Southerland, “With AI and criminal justice, the devil is in the data”, American Civil Liberties Union, 9 April 2018. Available at www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data.
1222 Kristian Lum and William Isaac, “To predict and serve?”, Significance, vol. 13, No. 5 (2016). Available at https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x.
1223 Jacob Snow, “Amazon’s face recognition falsely matched 28 Members of Congress with mugshots”, American Civil Liberties Union, 26 July 2018. Available at www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28. The author noted that 11 of the 28 false matches misidentified people of colour (approximately 39 per cent), including civil rights leader John Lewis and 5 other members of the Congressional Black Caucus. Only 20 per cent of current members of Congress are people of colour, which indicates that members of colour were falsely matched at a significantly higher rate.
1224 A/HRC/44/57, para. 56.