8. SMCs have a responsibility to provide content policies in the various languages
used by their community members, in particular the languages in which SMCs make
their services available. Content policies on hate speech must be especially
accessible to linguistic minorities at risk of violence or incitement to hatred or hostility.
Commentary
When SMCs make their services available in a language, and thus to speakers of that language, they
have an increased responsibility to make content policies accessible in the same language. There is
often a disparity between the number of languages that SMCs accommodate because doing so is
commercially advantageous, and the far smaller number of languages into which content policies are
translated and in which moderation takes place: developing AI systems and assigning additional human
moderators for each language is resource intensive and thus commercially disadvantageous.
The choice of which languages content policies are translated into is normally based on scale of use
and commercial considerations. This means that it will often be minorities who are excluded from
accessing content policies, as they will, by and large, also constitute numerical minorities, and such
selection may even privilege majority populations. The responsibility might be lesser, but SMCs could
still have an additional responsibility to moderate content in spoken languages that are expressed
through the script of other languages, for example a minority language transliterated into Latin script.
9. Transparency reports should provide data on all content moderation relating to
hate speech and minorities. This data should be disaggregated in a manner such
that the protected groups or minorities most at risk or under threat are
discernible, as are the States and regions in question. Reporting should not be
limited to content removals but should cover the full range of responses taken.
Commentary
SMCs are increasingly issuing periodic transparency reports carrying data on the removal of violating
content across categories of harmful content. While these reports show the number of removals for
violations of policies such as those on hate speech, incitement, and dangerous organisations and
individuals, they do not show which protected groups or minorities were targeted the most or the least,
which States the data relates to, or the distribution of languages in which the content was posted. Being
transparent about those who are most at risk and where they are located can play a vital role in publicly
demonstrating where and why resources need to be allocated to mitigate the escalation and severity of harms.
Such transparency can also be relied on as a vital advocacy tool by civil society organisations and spur
concerned States to adopt concrete policies to address such societal issues. Simultaneously, it can instil
public and governmental confidence and trust that the relevant SMC is genuinely concerned about the
community members who use its service and the potential harm to society that can occur. In the worst
cases, it can allow for the allocation of additional human moderators and the limitation or cessation of
services to prevent the incitement and organisation of mass violence, up to and including genocide.
Disaggregating moderation data by perpetrator, terms used, prevalent languages, type and severity of
hate speech, and hate speech targeting minorities can considerably improve the mapping of where,
against whom, by whom and with what severity hate speech occurs, and encourage collaborative
thinking and action to address the issue. The range of responses taken should also be listed.