interests. However, it also created an environment where provocative hate speech often elicited higher levels of engagement, and fostered a digital climate in which extreme, intolerant or violent elements gravitated towards each other and became further radicalised. This, the argument continued, maximised community member engagement and hence the monetisation of extreme views and spaces.

Two interrelated issues need to be addressed in compliance with international human rights law. The first is that the criteria by which any online expression or content is identified as ‘borderline’, that is, close to but not quite violating hate speech or incitement policies, must be publicly stated and clearly defined. Content cannot and should not be treated differently under a confidential demotion policy merely because it approaches, without crossing, the threshold of violating content policies; the rationale for such treatment must be elaborated and opened to scrutiny and appeal. Without this process, such confidential policies fail to satisfy even the most basic standards of legal certainty and raise serious concerns that they interfere with controversial expression explicitly protected under article 19 of the ICCPR rather than with hate speech. These intermediate content policies, which focus on limiting the proliferation of content rather than its removal, should be clearly stated, and should be intended and necessary for the prevention of harm to protected groups and minorities from hate speech. The responses taken must then be proportionate to the perceived harm, whether algorithmic non-amplification, the appending of warning labels, click-through shields, or the disabling or limiting of engagement and sharing. Such policies and their enforcement measures must not be disproportionate, exceeding what is necessary to prevent harm to minorities, but equally must not rely on measures that fail to achieve the desired aim. An example would be attaching warning labels or creating click-through shields that do little to mitigate the spread of hateful content, especially where that content is directed at a devoted and ardent following rather than the general public, as is often the case for extremist views and sentiments.

The second is that hate speech that does violate SMCs’ publicised content policies should not benefit from any amplification through algorithms or user engagement designed to accelerate virality. Put differently, automated content moderation systems must be able to distinguish the virality of violating content from the virality of permissible content. When the performance of SMCs in preventing and combating hate speech is assessed under the UNGPs, whether by regulators and Governments or by academics and journalists, the assessment should consider not only whether hate speech policies were enforced against violating content, or whether such content was proactively detected, but also how much exposure the content garnered through distribution and broadcast. As such, the time before removal of content should not be the only factor; the number of people the content reached must also be taken into account. This should be referred to as the ‘rate of exposure to harmful content’. A ‘low rate of exposure’ over several days may cause less harm than a ‘high rate of exposure’ over a few hours. Any indication of the level of exposure that should be established as an ideal would be to some extent arbitrary. Instead, SMCs should publicly share such data and metrics in their transparency reports and strive to constantly improve their systems and these measurements.
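To illustrate how rate and duration interact, the relationship can be sketched in a simple form (an illustrative formalisation only; the symbols below are introduced here for exposition and are not drawn from any SMC’s published metrics):

$$E = \int_0^T r(t)\,\mathrm{d}t$$

where r(t) is the number of users exposed to the content per hour, T is the time until removal, and E is the cumulative exposure. Assuming constant rates, content seen by 100 users per hour and removed after 72 hours reaches 7,200 users, whereas content seen by 5,000 users per hour and removed after only 3 hours reaches 15,000. Time to removal alone would rank the first case as the greater failure, while the rate of exposure correctly identifies the second.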
6. Both automated and human moderation must be available not only in the languages most prevalent on the platform but also in the languages of minority groups or others at risk of human rights abuses, in particular violence.

Commentary

SMCs have a responsibility to anticipate and prevent harm against their community members, in particular those that are vulnerable and marginalised, notably minorities.42 There is no reasonable or objective basis for offering a higher level of protection in some regions of the world and not in others. While there may be commercial and reputational reasons for content moderation resources to be allocated to

42 UNGP.
