remediate this racial discrimination, and private actors, such as corporations, have related
responsibilities to do the same.
6. Among emerging digital technologies, the Special Rapporteur focuses in the report
on networked and predictive technologies, many involving big data and artificial
intelligence, with some emphasis on algorithmic (and algorithmically assisted) decision-making. Much of the existing human rights analysis of racial discrimination and emerging
digital technologies has shed light on a specific set of issues: online hate incidents and the
use of digital platforms to coordinate, fund and build support for racist communities and
their activities. In the report, the Special Rapporteur goes a step further, bringing racial
equality and non-discrimination principles to bear on the structural and institutional impacts
of emerging digital technologies, which researchers, advocates and others have identified as
alarming. Among the concerns is the prevalence of emerging digital technologies in
determining everyday outcomes in employment, education, health care and criminal justice,
which introduces the risk of systemized discrimination on an unprecedented scale. A recent
report from the European Union Agency for Fundamental Rights highlights examples of
these concerns in the European Union and provides valuable recommendations for the
required response.4
7. As “classification technologies that differentiate, rank, and categorize”, artificial
intelligence systems are at their core “systems of discrimination”.5 Machine-learning
algorithms reproduce bias embedded in large-scale data sets and can thereby mimic and reproduce the implicit biases of humans, even in the absence of explicit algorithmic rules that
stereotype.6 Data sets, as a product of human design, can be biased due to “skews, gaps, and
faulty assumptions”.7 They can also suffer from “signal problems”, that is, demographic non- or under-representation resulting from the unequal ways in which data were created or collected.8
In addition to inaccurate, missing and poorly represented data, “dirty data” include data that
have been manipulated intentionally or distorted by biases.9 Such data sets potentially lead
to discrimination against or exclusion of certain populations, notably minorities defined by race, ethnicity, religion and gender.
8. Even where discrimination is not intended, indirect discrimination can result from
using innocuous and genuinely relevant criteria that also operate as proxies for race and
ethnicity. Other concerns include the use of and reliance on predictive models that
incorporate historical data – data often reflecting discriminatory biases and inaccurate
profiling – including in contexts such as law enforcement, national security and
immigration. At a more fundamental level, the design of emerging digital technologies
requires developers to make choices about how best to achieve their goals, and those choices will result in different distributional consequences.10 A core concern of the Special
Rapporteur in the report is with such choices that disparately affect the human rights of
individuals and groups on the basis of their race, ethnicity and related grounds.
9. With respect to class in particular, research shows that even where policymakers,
civil servants and scientists have pursued automated decision-making with an intention to
make more efficient and fairer decisions, the systems they used to achieve these ends have been shown to reinforce inequality and result in punitive outcomes for persons living
in poverty.11 Given that racially and ethnically marginalized communities often
disproportionately live under conditions of poverty, equality and non-discrimination
principles should be central to human rights analyses of emerging digital technologies for
social welfare and other socioeconomic systems. An important recent report by the Special
4 See https://fra.europa.eu/en/publication/2018/bigdata-discrimination-data-supported-decision-making.
5 Sarah Myers West, Meredith Whittaker and Kate Crawford, “Discriminating systems: gender, race and power in AI” (New York, AI Now Institute, 2019), p. 6.
6 See https://philmachinelearning.files.wordpress.com/2018/02/gabbriellejohnson_algorithmic-bias.pdf.
7 See https://foreignpolicy.com/2013/05/10/think-again-big-data.
8 Ibid.
9 See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3403010.
10 See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899.
11 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York, Picador, 2018).