A/HRC/46/57
media.12 The results have been the driving force behind an explosion of hate, radicalization,
dehumanization, scapegoating, incitement to genocide and advocacy of hatred that
constitutes incitement to violence, hostility or discrimination against minorities in social
media, leading to alarming increases in hate crimes and atrocities.13 Reports widely confirm
that hate speech online has been linked to a global increase in violence toward minorities,
including mass shootings, lynchings and ethnic cleansing.14 Hate pays, minorities suffer:
social media platforms are hugely profitable, while minorities increasingly experience hate
and incitement to violence through the platforms.
71. Changes have begun, though belatedly, following a recognition of the scale of hate
speech targeting minorities in social media. As pointed out in submissions to the Special
Rapporteur, even when social media platforms have vetting policies in place, they may be
too slow, ineffective or broadly formulated.15 Implementation often fails to protect the most
vulnerable from harm. Additionally, content posted by minorities, who are particularly
subjected to hate speech, demonized or scapegoated, is more frequently removed than content
posted by the majority containing discriminatory or racist speech. Numerous studies on hate
speech have noted that incidents against minorities are treated by Governments and law
enforcement as unimportant or as pranks, and casually dismissed.16 Ironically, perpetrators
of incidents that reach the threshold of incitement to genocide or advocacy of hatred that
constitutes incitement to discrimination, hostility or violence are often not prosecuted or
punished in some countries, while, as noted in the Rabat Plan of Action (para. 11):
At the same time, members of minorities are de facto persecuted, with a chilling effect
on others, through the abuse of vague domestic legislation, jurisprudence and policies.
This dichotomy of (1) non-prosecution of “real” incitement cases and (2) persecution
of minorities under the guise of domestic incitement laws seems to be pervasive. Anti-incitement laws in countries worldwide can be qualified as heterogeneous, at times
excessively narrow or vague. … [W]hile several States have adopted … policies, most
of them are too general, not systematically followed up, lacking focus and deprived
of proper impact assessments.
72. One recent example of change reflecting the recognition of the scale of hate speech
targeting minorities, and the need to adopt an approach that reflects the particular
vulnerability of minorities to hate speech and the greater harm that they experience, is the
indication by Facebook that its broad definition of hate speech needs to take into account
those who are particularly targeted and subjected to harm. In 2020, Facebook was in the
process of altering its algorithms to prioritize the flagging of hate speech targeting minorities
such as persons of African descent, Muslims and Jews.
73. Other examples abound of the dangers posed by algorithms and artificial
intelligence which, in the absence of protocols and human rights impact
assessments that duly take into account the vulnerability and targeting of minorities in social
media, are prone to contribute to and accentuate the hate and harm experienced by minorities.
In January 2021, a South Korean chatbot driven by artificial intelligence called Lee Luda,
which had almost a million users, was taken down just a few weeks after its launch after
12 Guillaume Guichard, “Facebook a minimisé des initiatives internes visant à affaiblir les contenus
extrémistes”, Le Figaro, 27 May 2020.
13 According to the Commissioner for Human Rights of the Council of Europe, Dunja Mijatović, in her
annual activity report for 2019 (21 April 2020), “[a]ntisemitism, Islamophobia and anti-Gypsyism
have reached alarming levels. … Hate speech and crimes against Roma also [remain] widespread.”
14 Zachary Laub, “Hate speech on social media: global comparisons”, Council on Foreign Relations, 11
April 2019.
15 The European Commission, in its fourth round of monitoring of implementation of the European
Union code of conduct on countering illegal hate speech online, emphasized that the code
“complements [national] legislation fighting racism and xenophobia, which requires authors of illegal
hate speech offences – whether online or offline – to be effectively prosecuted.” See European
Commission, “How the Code of Conduct helped countering illegal hate speech online”, February
2019. Available at https://ec.europa.eu/info/sites/info/files/hatespeech_infographic3_web.pdf.
16 Mari J. Matsuda, “Public response to racist speech: considering the victim’s story”, Michigan Law
Review, vol. 87, No. 8 (August 1989).