box” problem makes the artificial intelligence reasoning process insidious and opaque.19 In addition, many algorithms developed by business entities cannot be scrutinized because of contract and intellectual property laws, exacerbating accountability issues.20
22. The “black box” problem has particularly concerning implications in the context of systemic racism. As described above, systemic racism is an insidious yet deeply destructive, society-wide scourge. The forces driving it are not always recognized, a phenomenon exacerbated by persistent gaps in the collection of racially and ethnically disaggregated data.21 Without effective accountability mechanisms, artificial intelligence has significant capacity to become an additional driver of this already insidious and destructive phenomenon.
23. Artificial intelligence accountability issues have significant implications for the ability of those who experience acts of racial discrimination to seek effective remedies. Today, when people from marginalized racial and ethnic groups experience different outcomes because of human decision-making, courts and other accountability mechanisms can examine whether the actions were intentional and justifiable.22 When people are the decision-makers, there is often evidence that can be used to make such assessments. In many cases, autonomous decision-making processes do not create evidentiary trails in the way that human decision-makers do.23 “Black box” issues will exacerbate the already significant barriers to access to justice for those who experience racial discrimination.
B. Use of artificial intelligence and its discriminatory impact
24. In the present section, the Special Rapporteur provides examples of the uses of artificial intelligence across different societal domains and its racially discriminatory impacts. These examples are illustrative and non-exhaustive and are provided as clear evidence that artificial intelligence is already contributing to racial discrimination. The Special Rapporteur perceives these examples as interconnected and mutually reinforcing manifestations of racial discrimination, which together reinforce systemic, society-wide oppression along racial and ethnic lines.
25. The Special Rapporteur has chosen three domains to exemplify the discriminatory impact of artificial intelligence: law enforcement, security and the criminal justice system; education; and health care. In relation to the use of artificial intelligence in other contexts, the Special Rapporteur recommends consulting the reports of the previous mandate holder on the rise of digital borders, mapping racial and xenophobic discrimination in digital border and immigration enforcement, and on the use of digital technologies in border and immigration enforcement.24 The Special Rapporteur also refers readers to her report to the General Assembly, at its seventy-eighth session, on online racist hate speech, which addresses the use of artificial intelligence in social media content moderation,25 and to the report of the Special Rapporteur on extreme poverty and human rights to the General Assembly, at its seventy-fourth session, which provides an analysis of the use of artificial intelligence in social protection systems.26
19 Yavar Bathaee, “The artificial intelligence black box and the failure of intent and causation”, Harvard Journal of Law and Technology, vol. 31, No. 2 (2018); A/HRC/44/57, para. 34; and Renata M. O’Donnell, “Challenging racist predictive policing algorithms under the Equal Protection Clause”, New York University Law Review, vol. 94, No. 3 (June 2019).
20 A/HRC/44/57, para. 44.
21 A/HRC/47/53, para. 16.
22 Bathaee, “The artificial intelligence black box”.
23 Ibid.
24 A/75/590 and A/HRC/48/76.
25 A/78/538.
26 A/74/493.