representative. Decisions about the parameters and functioning of an algorithm can introduce
biases. Algorithm designers decide which variables an algorithm will use, how to define
categories or thresholds for sorting information and what data will be used to build the
algorithm. They also choose how to measure specific features and how to define algorithmic
success. Sometimes, the backgrounds or perspectives of algorithm designers may lead them
to embed unconscious biases, including racial biases, in their algorithm designs.13 This lack
of diversity among designers in digital technology sectors is reportedly exacerbated by the
absence of inclusive consultation processes in the development of artificial intelligence
systems, which contributes to such design issues.14
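As an illustration, the following sketch (Python, with entirely synthetic data and invented accuracy figures, not drawn from the sources cited in the present report) shows how one such design choice, namely defining algorithmic success by overall accuracy alone, can mask sharply unequal error rates between two demographic groups:

```python
# A minimal, hypothetical sketch of why the definition of "algorithmic
# success" matters: a model can score well on overall accuracy while
# erring far more often for one group. All figures are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Synthetic ground-truth labels for two demographic groups, A and B.
y_a = rng.integers(0, 2, n)
y_b = rng.integers(0, 2, n)

# Assumed behaviour: predictions are 95% accurate for group A, 75% for B.
pred_a = np.where(rng.random(n) < 0.95, y_a, 1 - y_a)
pred_b = np.where(rng.random(n) < 0.75, y_b, 1 - y_b)

overall = np.mean(np.concatenate([pred_a == y_a, pred_b == y_b]))
print(f"overall accuracy: {overall:.2f}")                 # ~0.85, looks fine
print(f"group A accuracy: {np.mean(pred_a == y_a):.2f}")  # ~0.95
print(f"group B accuracy: {np.mean(pred_b == y_b):.2f}")  # ~0.75, hidden harm
```

A designer who reports only the first figure never surfaces the disparity; choosing group-level error rates as the measure of success would.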
18. Algorithmic design choices can have significant discriminatory impacts in real life.
For example, when building a loan risk assessment algorithm, the way in which “risk” is
defined and measured may lead to discriminatory results. If an algorithm designer decides to
use credit scores as a proxy for risk, there could be discriminatory outcomes for groups of
people who tend to have lower credit scores. Research has shown that there can be a strong
correlation between credit score, race and other demographic indicators and that the use of
credit scores disadvantages certain groups.15 That correlation can, in many cases, be seen as
a by-product of existing systemic racism and exclusion. Individuals may thus be
disadvantaged by an algorithm designer's choice to use credit scores to assess loan risk, even
though credit score is ostensibly not a discriminatory criterion.
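The dynamic described above can be made concrete with a short sketch (Python; the score distributions and the 660 cut-off are entirely synthetic and chosen for illustration): group membership is never an input to the algorithm, yet approval rates diverge because the chosen proxy correlates with group.

```python
# A hypothetical sketch of proxy discrimination: "risk" is measured via a
# credit score, group membership is never used as an input, yet outcomes
# diverge because the score correlates with group. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
# Assumed structural disparity: group B's scores average 40 points lower.
score = rng.normal(680, 50, n) - 40 * group

THRESHOLD = 660                           # hypothetical approval cut-off
approved = score >= THRESHOLD             # the algorithm never sees `group`

print(f"approval rate, group A: {approved[group == 0].mean():.1%}")  # ~65%
print(f"approval rate, group B: {approved[group == 1].mean():.1%}")  # ~35%
```

The disparity arises entirely from the designer's choice of proxy, not from any explicit use of a protected attribute.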
3. Use for discriminatory purposes
19. Artificial intelligence can, in some cases, be used for explicitly racist purposes through
its selective deployment against targeted groups, resulting in discriminatory outcomes. For
example, there are reports of law enforcement agencies intentionally using artificial
intelligence to surveil and overpolice particular communities, along racially discriminatory
lines.16 Furthermore, intentional discrimination can occur when Governments and others
exploit the technology’s capabilities to monitor, profile and target specific groups or
individuals on the basis of their racial or ethnic identities.17
20. The spread of disinformation is another way in which artificial intelligence can be
used for explicitly racist purposes. Political actors can use artificial intelligence to generate
texts, images and videos to manipulate public opinion and political processes in their favour
and undermine trust in institutions, including along racial lines. Governments are also
reported to have used artificial intelligence to sow discord and facilitate online censorship.18
4. Accountability problems
21. The fact that some artificial intelligence tools make decisions independently of
humans means that the decision-making process is hidden, as if in an opaque “black box”. In
addition, an algorithm might make decisions independently because, once exposed to data,
artificial intelligence algorithms are constantly updating themselves. Over time, an artificial
intelligence tool may use, in its decision-making, factors on which it was not originally
programmed to rely. Instead, these factors come from patterns that it has itself identified in
the data. As the algorithm incorporates these new patterns into its code and decision-making,
individuals relying on the algorithm may no longer be able to “look under the hood” and
pinpoint the criteria that the algorithm has used to produce certain outcomes. Thus, the “black
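A brief sketch of the drift described above (Python, with synthetic data; scikit-learn's SGDClassifier is used here as a stand-in for any continually updated model): after incremental updates, the model comes to rely on a feature its designers never singled out, and that reliance is visible only by inspecting the learned weights, not the original specification.

```python
# A hypothetical sketch of an online model drifting: after continual
# updates, the model leans on a feature it was not originally intended to
# use. Data is synthetic; SGDClassifier stands in for any adaptive model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier(loss="log_loss", random_state=0)

# Initial training: the label depends only on feature 0; feature 1 is noise.
X0 = rng.normal(size=(2_000, 2))
y0 = (X0[:, 0] > 0).astype(int)
clf.partial_fit(X0, y0, classes=[0, 1])
print("weights after initial training:", clf.coef_.round(2))

# Later data: feature 1 (e.g., an unintended proxy) now tracks the label,
# and the deployed model keeps updating itself on each new batch.
for _ in range(50):
    X1 = rng.normal(size=(200, 2))
    y1 = (X1[:, 1] > 0).astype(int)
    clf.partial_fit(X1, y1)
print("weights after continual updates:", clf.coef_.round(2))
# The second weight now dominates: the model's decision criteria have
# changed without any change to its original code or specification.
```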
13 Ninareh Mehrabi and others, “A survey on bias and fairness in machine learning”, ACM Computing Surveys, vol. 54, No. 6 (2022); The London Story submission; and A/HRC/44/57, para. 17.
14 NetMission.Asia submission.
15 A.R. Lange and Natasha Duarte, “Understanding bias in algorithmic design”, Medium, 6 September 2017.
16 See Amnesty International, Decode Surveillance NYC: Methodology (London, 2022); and NetMission.Asia submission.
17 NetMission.Asia submission.
18 Tate Ryan-Mosley, “How generative AI is boosting the spread of disinformation and propaganda”, MIT Technology Review, 4 October 2023.