interrelated system of laws, policies, practices and attitudes in State institutions, the private
sector and societal structures that, combined, result in direct or indirect, intentional or
unintentional, de jure or de facto discrimination, distinction, exclusion, restriction or
preference on the basis of race, colour, descent or national or ethnic origin”.9 As reflected in
that definition, systemic racism is a complex, often insidious and society-wide phenomenon.
Manifestations of systemic racism in one domain are interrelated, interdependent and
mutually reinforcing with those in others. Examining the cross-cutting ways in which
artificial intelligence contributes to racial discrimination can help to identify how it interacts
with and reinforces manifestations of systemic racism and, in turn, entrenches systemic
oppression in society along racial and ethnic lines.10
1. Data problems

13. The rise of artificial intelligence systems and machine learning algorithms has led to
the digitization of data on a massive scale. Algorithms use those data to make decisions and
engage in actions across several sectors. However, the data sets on which algorithms are
trained are often incomplete or underrepresent certain groups of people. If particular groups
are over- or underrepresented in the training sets, including along racial and ethnic lines,
algorithmic bias can result. Similarly, if the training sets already contain biased data, the
systems trained on them can produce biased outcomes.
14. If the training data are insufficient, the algorithms may make predictions that are
systematically discriminatory for groups that are unrepresented or underrepresented in the
data. Not only can algorithmic bias occur with too little data; algorithms based on
unrepresentative data can also produce skewed outcomes. For instance, a study focused on
law enforcement image databases in the United States showed that people of African descent
were more likely to be erroneously singled out in facial recognition networks used by law
enforcement officers. This was due to errors in facial identification for that group and the
overrepresentation of people of African descent in police photograph databases, which
reflects historical patterns of systemic racism.11
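The mechanism described above can be illustrated with a short, purely hypothetical sketch in Python. All data below are synthetic and every number is an assumption chosen for illustration; the sketch is not a reconstruction of the facial recognition systems or the study cited, but it shows how a model trained on data that underrepresent one group can produce systematically higher error rates for that group:

# Minimal, hypothetical sketch: all data are synthetic and all numbers are
# illustrative assumptions, not drawn from the study cited above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature binary classification data; "shift" places the group's
    # decision boundary in a different region of feature space.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is heavily underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Tested on fresh samples, errors concentrate on the underrepresented group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(2000, shift)
    print(name, "error rate:", round(1 - model.score(Xt, yt), 3))

In this toy setting, the two groups differ only in where their data lie: a single model fitted overwhelmingly to group A performs well for that group while misclassifying a large share of group B.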
15. Historical biases can affect the data themselves. A core element of machine learning
is making predictions about the future on the basis of data from the past. However, if past
data are biased against certain groups, including along racial and ethnic lines, the computer
models can reproduce and amplify those biases. The use of biased or flawed data to inform
real-life decisions can further target and harm marginalized racial and ethnic groups, because
deploying those data in artificial intelligence systems generates new data that are then fed
into future decisions. Such self-reinforcing systems can replicate and deepen existing
disparities.
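The self-reinforcing dynamic described above can likewise be illustrated with a small, hypothetical simulation; all quantities below are invented for illustration and do not describe any deployed system. Two areas have identical underlying incident rates, but the historical records are skewed; a decision rule that targets the area with the most recorded incidents then generates new records only where it looks, so the initial disparity deepens with every iteration:

# Hypothetical simulation: the rates and counts below are invented solely to
# illustrate a data feedback loop, not taken from any real system.
import numpy as np

true_rate = np.array([1.0, 1.0])    # both areas behave identically
recorded = np.array([60.0, 40.0])   # historical records skewed by past practice

for step in range(5):
    hotspot = int(np.argmax(recorded))   # decision rule: target the "hotspot"
    found = true_rate[hotspot]           # incidents are found where one looks
    recorded[hotspot] += 10 * found      # new records feed the next round
    print(f"step {step}: recorded incidents = {recorded}")

# The gap between the two areas widens indefinitely, even though their
# underlying rates never differed.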
16. The final issue with data is privacy. The data used in artificial intelligence systems
often include the personal information of the individuals to whom the data belong. The
collection and processing of data without consent violates the right to privacy. There are also
incidents in which data collected in one setting, such as health care, including through
health-care applications, have been shared without consent for use in another, such as law
enforcement. Data breaches and unauthorized access to personal information
through hacking pose additional privacy concerns. For those from racially marginalized
groups, human rights concerns relating to the right to privacy can be amplified. Privacy
violations can put those groups at risk of ostracization, discrimination or physical danger.12
2. Algorithm design problems

17. A second common form of bias in artificial intelligence tools arises from the way in
which algorithms are designed. If bias is embedded in design choices, an algorithm can
contribute to biased outcomes, even if the data fed into the algorithm are perfectly
9 A/HRC/47/53, para. 9.
10 A/HRC/44/57, para. 43.
11 Nicol Turner Lee, Paul Resnick and Genie Barton, “Algorithmic bias detection and mitigation: best practices and policies to reduce consumer harms”, Brookings Institution, 22 May 2019.
12 Samantha Lai and Brooke Tanner, “Examining the intersection of data privacy and civil rights”, Brookings Institution, 18 July 2022. See also Privacy International submission.