III. Examples of racial discrimination in the design and use of emerging digital technologies
A. Explicit intolerance and prejudice-motivated conduct
24. Actors seeking to spread racist speech and incitement to discrimination and violence
have relied on emerging digital technologies, with social media platforms playing a pivotal
role. The Special Rapporteur has highlighted these trends in previous reports on neo-Nazi
and other white supremacist groups that rely on social media platforms to recruit, raise
funds and coordinate.47 Another prominent example of explicitly prejudice-motivated use of
emerging digital technologies is the use of Facebook by radical nationalist Buddhist groups
and military actors in Myanmar to exacerbate discrimination and violence against Muslims
and the Rohingya ethnic minority in particular.48 In 2018, the Chief Executive Officer of
Facebook, Mark Zuckerberg, testified to the United States Senate that Facebook’s artificial
intelligence systems were unable to detect hate speech in such contexts.49 These are not the
only instances: a submission also highlighted the use of Facebook to amplify discriminatory
and intolerant content, including content inciting violence against religious and linguistic
minority groups in India.50
25. Social media bots – automated accounts – have been used to shift political discourse and misrepresent public opinion. In 2019, bots were used for social media manipulation campaigns in 50 of a sample of 70 countries.51 For groups that rely on
emerging digital technologies as a strategy for promoting racial, ethnic and religious
discord and intolerance, bots are central to their capacity to spread racist speech or
disinformation online. Examples suggest that the coordinated use of bots has been
especially prevalent before elections. For example, leading up to the Swedish election in
2018, researchers identified 6 per cent of Twitter accounts discussing national politics as bots; those accounts posted about topics related to immigration and Islam more often than genuine accounts did.52 Similarly, in the period before the 2018 election in the United States, 28 per
cent of Twitter accounts posting antisemitic tweets were bots, which posted 43 per cent of
all antisemitic tweets.53 Emerging digital technologies in the Russian Federation have been
used to promote ethnic and racial divisions on social media,54 through hundreds of falsified
online personas and pages on Twitter, Facebook and other social media sites. Although
some posts were directed towards ethnic minority groups and called for racial equality,
many denounced such groups in an effort to promote racial tensions. Some personas
supported white nationalist groups, prompting discrimination and violence against racial
minorities.55
B. Direct or indirect discriminatory design/use of emerging digital technology
26. The design and use of emerging digital technologies can directly and indirectly
discriminate along racial or ethnic lines in access to a range of human rights.
47 See A/73/312 and A/HRC/41/55.
48 A/HRC/42/50, paras. 71–75.
49 See www.commerce.senate.gov/2018/4/facebook-social-media-privacy-and-the-use-and-abuse-ofdata.
50 Submission by Avaaz.
51 See https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf.
52 See www.semanticscholar.org/paper/Political-Bots-and-the-Swedish-General-Election-FernquistKaati/2af3d1e16d5553dc489d8b44321ea543d571a4a9.
53 See www.adl.org/resources/reports/computational-propaganda-jewish-americans-and-the-2018midterms-the-amplification.
54 See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3304223.
55 Ibid., p. 180.