investment in content moderation by companies tends to be highly unequal across
countries, with significant underinvestment in the global South. Users from the
global South can report hateful speech multiple times without action being taken,
as was the case in Myanmar.57 In 2018, the Chief Executive Officer of
Facebook testified to the United States Senate that Facebook’s artificial intelligence
systems had been unable to detect hate speech in the context of Myanmar
(A/HRC/44/57, para. 24). In its submission, Amnesty International also highlighted
that, in the midst of the conflict in Myanmar in 2017, there were only five
Burmese-speaking content moderators for the region. The organization also reported that
weaknesses in detecting anti-Rohingya and anti-Muslim hate speech persist on the
platform, notwithstanding some efforts to make improvements following the
atrocities.
Race-neutral approaches
53. Another major challenge that the Special Rapporteur has identified in
addressing online racist hate speech is the race-neutral approach to the design,
development and governance of digital technologies. As highlighted by the Special
Rapporteur’s predecessor:
The public perception of technology tends to be that it is inherently neutral and
objective, and some have pointed out that this presumption of technological
objectivity and neutrality is one that remains salient even among producers of
technology. But technology is never neutral – it reflects the values and interests
of those who influence its design and use, and is fundamentally shaped by the
same structures of inequality that operate in society (ibid., para. 12).
The assumption that digital technologies are neutral, combined with the absence of
an approach that explicitly addresses their capacity to replicate and exacerbate
racial and ethnic inequalities within and between societies, makes it challenging
to effectively address online racist hate speech. Such trends can be exacerbated by
the lack of racial and ethnic diversity within the technology sector and among those
who design the algorithms that determine content dissemination and moderation, as
well as the guidelines and policies relating to online racist hate speech (ibid., para. 17).
54. In his 2021 report, the Special Rapporteur on minority issues highlighted that
even though minorities are the most affected by online hate speech, including that
which would meet the threshold for incitement, their experiences and perspectives are
not explicitly recognized within efforts to address the phenomenon. Instead, the
“extent and brutality of hate speech is ignored, even camouflaged in a fog of
generalities” (A/HRC/46/57, para. 22). The Special Rapporteur echoes such concerns
in relation to the experiences and perspectives of those targeted by online racist hate
speech. She is aware of few initiatives that enable the meaningful participation
of people from the most affected groups in the design, development and governance
of digital platforms, or in national and international initiatives to prevent
and address online racist hate speech.
Deep societal drivers of online racist hate speech
55. While the drivers of online racist hate speech are inadequately understood, as
elaborated on below, the phenomenon does not occur in a vacuum. Societal trends can
be identified that contribute to a climate in which online racist hate speech flourishes.
Economic inequality and interrelated trends in political dissatisfaction and
disenfranchisement have been identified as factors that can drive online racist hate
__________________
57 Ibid.