A/HRC/46/57
minorities. In extreme cases, they can even be used to propagate calls for genocide against
minorities. Thus, unregulated online expressions of hate can increase the chances of human
rights violations taking place offline against some of the most marginalised segments of
society.
At the same time, some internet companies are responding to pressure to remove online
content that is deemed harmful. This can occur through the imposition of intermediary
liability, the application of filters and the use of automated tools. Some companies have also set
their own content standards in this regard. Frequently, however, these measures have the
purpose or effect of unlawfully, illegitimately and unnecessarily restricting the exercise of
human rights – especially freedom of opinion and expression – online and operate in the
absence of any meaningful oversight mechanisms. Furthermore, such measures may have a
disproportionate impact upon or even target individuals from marginalised groups,
particularly persons belonging to minorities (as well as political opponents, critics, and
human rights defenders), while limiting the possibilities for information-sharing, awareness-raising and advocacy for civil society organisations, human rights defenders and
representatives of persons belonging to minorities. Moreover, companies’ and social media
platforms’ online content moderation policies can lack transparency and any precise and
meaningful basis in international human rights standards, raising the possibilities that the
decisions made by these actors undermine the human rights of individuals, including those
belonging to minority groups.
Thus, the fundamental, two-pronged concern first raised by the Rabat Plan of Action in
October 2012 (A/HRC/22/17/Add.4) – that members of minorities are effectively persecuted
through the abuse of vague domestic legislation, jurisprudence and policies on ‘hate speech’,
whereas actual incidents which meet the threshold of incitement to discrimination, hostility
or violence under international human rights law are not addressed – has become an even
more pressing issue at the beginning of the new decade, one that requires effective and urgent
responses from States, social media platform owners, and other stakeholders which are based
on international human rights law and standards.
Freedom of expression and the essential communication tools and services provided by or
dependent on an open and accessible internet must be protected, just as minorities and others
who are marginalised or vulnerable must be protected from hate speech, incitement to
discrimination, hostility or violence, and even calls for genocide.
Call for submissions
In accordance with the established practice of thematic mandate-holders, the Special
Rapporteur welcomes inputs by States, UN agencies, regional and international
organizations, national human rights institutions, civil society, scholars and research
institutions, private companies including those from the ICT sector, and others who may wish
to submit for this purpose. Such submissions may include, for instance, recommendations,
evidence and case studies. The following questions are intended to guide submissions:
1. Please provide annual disaggregated data, since 2017 if possible, on hate speech on
social media, and in particular hate speech targeting minorities (national or ethnic, religious
and linguistic minorities). Please also indicate whether there are plans to collect
disaggregated data specifically on hate speech targeting minorities, considering that
in most countries the victims of hate speech on social media are usually members of
minorities.
2. Please identify the mechanisms and processes in place to remove, penalise or
otherwise address hate speech on social media targeting minorities. Please also specify and
include any studies or reports assessing their implementation and effectiveness.
3. Please provide examples (legal and non-legal) of good practices of appropriate
responses developed by States, internet companies, civil society and other relevant
stakeholders to address online ‘hate speech’, including incitement to discrimination, hostility
or violence, against persons belonging to minorities. Please include assessments, if any, of
the effectiveness of these examples.