Rapporteur on extreme poverty and human rights describes the rise of the digital welfare state in countries in which systems of social protection and assistance are powered by emerging digital technologies, in ways that have severe negative human rights implications for social welfare.12 As elaborated in a later section, the Special Rapporteur’s assessment is that digital welfare states, as they exist today, are better described as “discriminatory digital welfare states” because they increasingly allow race and ethnicity (among other grounds) to shape access to human rights on a discriminatory basis. Urgent intervention is necessary to curb these discriminatory patterns.
10. In the preparation of the report, the Special Rapporteur benefited from valuable input from: expert group meetings hosted by the Global Studies Institute of the University of Geneva, the Promise Institute for Human Rights at the University of California, Los Angeles (UCLA), School of Law and the UCLA Center for Critical Internet Inquiry; research by the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society, and the New York University School of Law Center on Race, Inequality and the Law; interviews with researchers; and submissions received from a range of stakeholders in response to a public call for submissions. Non-confidential submissions will be available on the webpage of the mandate.13
II. Drivers of discrimination and inequality in emerging digital technologies
11. Any human rights analysis of emerging digital technologies must first grapple with the social, economic and political forces that shape their design and use, and with the individual and collective human interests and priorities that contribute to the racially discriminatory design and use of these technologies.
12. Technology is commonly perceived as inherently neutral and objective, and some have pointed out that this presumption of objectivity and neutrality remains salient even among producers of technology. But technology is never neutral: it reflects the values and interests of those who influence its design and use, and is fundamentally shaped by the same structures of inequality that operate in society.14 For example, a 2019 review of 189 facial recognition algorithms from 99 developers around the world found that “many of these algorithms were 10 to 100 times more likely to inaccurately identify a photograph of a black or East Asian face, compared with a white one. In searching a database to find a given face, most of them picked incorrect images among black women at significantly higher rates than they did among other demographics.”15 There can no longer be any doubt that emerging digital technologies have a striking capacity to reproduce, reinforce and even exacerbate racial inequality within and across societies. A number of important academic studies have shown concretely that the design and use of technologies are already having this precise effect across a variety of contexts.16 More research and funding are required to unpack fully how even the inductive processes at the core of some artificial intelligence techniques, such as machine learning, contribute to undercutting values such as equality and non-discrimination.17
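To make the metric behind the finding quoted above concrete, the following minimal Python sketch computes a per-group false match rate, that is, the share of different-person comparisons that a face verification algorithm wrongly declares a match. The data, group labels and function name are entirely hypothetical illustrations and do not come from the NIST study cited in the report:

```python
from collections import defaultdict

# Hypothetical verification results. Each record is
# (group, same_person, matched), where same_person is the ground truth
# and matched is the algorithm's verdict for that comparison.
results = [
    ("group_a", False, True),   # false match
    ("group_a", False, False),
    ("group_b", False, True),   # false match
    ("group_b", False, True),   # false match
    ("group_b", False, False),
]

def false_match_rates(records):
    """Per-group false match rate: the fraction of different-person
    comparisons that the algorithm wrongly declared a match."""
    impostor = defaultdict(int)    # different-person comparisons seen
    false_hits = defaultdict(int)  # of those, wrongly matched
    for group, same_person, matched in records:
        if not same_person:
            impostor[group] += 1
            if matched:
                false_hits[group] += 1
    return {g: false_hits[g] / impostor[g] for g in impostor}

for group, rate in sorted(false_match_rates(results).items()):
    print(f"{group}: false match rate = {rate:.2f}")
# A 10- to 100-fold gap between such per-group rates is the kind of
# disparity the review quoted above reported.
```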
12 See A/74/493.
13 See www.ohchr.org/EN/Issues/Racism/SRRacism/Pages/Callinformationtechnologies.aspx.
14 Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology (Chicago, University of Chicago Press, 1986), p. 29.
15 See www.scientificamerican.com/article/how-nist-tested-facial-recognition-algorithms-for-racial-bias.
16 See, e.g., Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, Penguin, 2016); Ruha Benjamin, Race After Technology (Cambridge, United Kingdom, Polity Press, 2019); and Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York, New York University Press, 2018).
17 See, e.g., Gabrielle M. Johnson, “Are algorithms value-free? Feminist theoretical virtues in machine learning”, Journal of Moral Philosophy (forthcoming).