Western or European States, international human rights law in no way necessitates such discrimination and in fact requires the allocation of moderation resources on the basis of the risk of harm, which is frequently greatest in less developed States of the global South. Indirect discrimination may also occur because of the limited number of languages in which Facebook allows use of its platform, and even more so in relation to languages that it does not accommodate on its platform at all, or accommodates but does not moderate.
When an SMC opts to operate in a certain country, it undertakes to be responsible for the potential harm it could cause to prospective users of its services. SMCs should not, merely because of resource implications or arbitrary criteria, provide automated and human content moderation for some linguistic groups and not others. Furthermore, if only majority languages are subject to moderation systems and the enforcement of content policies, it is mostly minorities who will be excluded, despite being among the groups most at risk of discriminatory treatment and violence.
This is exacerbated by the even smaller number of languages in which content policies themselves are available. Even for those languages in which content policies are available, it is not clear that community members in the States or regions concerned are aware of their rights to protection and complaint under those policies. For these protections to be effective, SMCs must run periodic public awareness and education campaigns, in particular on their own platforms, in all spoken languages or, failing that, in the number of languages that reaches the most people and the most marginalised and vulnerable, so that the rights and protections offered by SMCs are not illusory.
7. Human and automated moderation should be unbiased and involve consultation
with minorities. Ideally, human moderators should be from diverse backgrounds
and representative of protected groups and minorities.
Commentary
SMCs should ensure that automated, artificially intelligent detection and removal systems are free from bias of any kind against protected groups and especially minorities. Such bias can arise from the types of content the systems have been designed to identify. Pertinent questions include whether the lexicon of hate speech terms is up to date and covers all protected groups and minorities. Such terms are constantly changing and developing, and to keep AI systems effective and current, mechanisms must be created for continuous consultation with those who hold local linguistic and cultural knowledge, in particular persons belonging to minorities and minority civil society organisations.
Similarly, human moderation is the most effective means of content moderation, but it is also the most resource intensive and time consuming. There are nonetheless situations where high levels of human moderation can be dedicated to a particular language and country. Even then, moderators may miss important hate speech if they are not culturally aware of the local context. A greater issue arises when moderators are linguistically competent and culturally aware but are themselves consciously or unconsciously biased, belong to the majority ethnic, religious or linguistic group, or even believe in hateful tropes and stereotypes against minorities and protected groups. This should be combatted through training that identifies and rectifies such biases, but the most effective way to implement this guideline is to strive to employ human moderators belonging to protected groups and minorities.