4. Models of regulation initiated by Internet and social media providers
51. Internet and social media providers have also set up internal regulatory frameworks;
for example, Google has implemented an internal regulation policy that allows users to
report content that they believe should be flagged for removal by Google services
(www.google.com/transparencyreport/removals/copyright/faq/). Through the system, users
submit separate requests on an online form for each service from which they believe
content should be removed. These are considered legal notices, each of which is entered in
a database, called “Chilling Effects”, on requests for the removal of information from the
Internet. The purpose of Chilling Effects is to protect websites from cease-and-desist orders
issued by people claiming that posted content is copyrighted, and to determine whether the
content is illegal or violates Google policies. While Google has set up a process for
removing offensive information, such as racist content or hate speech, the final say in
determining which content to remove or hide remains the sole prerogative of Google.
52. Similarly, Twitter adheres to a policy on reporting violations of the website’s terms
of service (http://support.twitter.com/articles/20169222-country-withheld-content), which
comprises an internal process for deciding which tweets should be removed for violating
those terms. The policy highlights only which content is not allowed, such as impersonation,
sharing a third person’s private information, and violence and threats, which raises the issue
of a lack of transparency in this internal regulatory process.
53. Another social media platform, Facebook, has also set up a removal policy for
offensive content, although there has been criticism of its apparent lack of transparency.
The Special Rapporteur was informed that users have been confronted with the arbitrary
removal of content deemed abusive or offensive without being given any explanation. He also learned that
users who reported racist or offensive content had not seen it removed, despite complaints
submitted to social media moderators. Although Facebook has an extensive list in its terms
of service (www.facebook.com/legal/terms) of prohibited content, users still do not know
exactly what content will be censored, removed or left unattended, given that the platform
retains the power to remove posts and content without giving any reason. This lack of transparency
has raised concern among users, some of whom believe that the process and reasons for
removing posted content are unfair and prejudicial, while requests by others to remove racist or
offensive material are ignored.
54. The Special Rapporteur notes that Internet providers and social media platforms
seem to be increasingly adapting their norms and policies to the State where users post
content. Providers and platforms accordingly record, store and provide information on users
and information accessed, sometimes as a prerequisite for permission to operate in the country
concerned. The Special Rapporteur also reiterates his concern at the issue of what content is to
be considered “racist”, “illegal” or “inciting hatred”, and his view that Internet providers
and social media networks should not make decisions regarding user-generated content or
take such actions as removing or filtering content on their own.7 He also agrees with the
Special Rapporteur on the promotion and protection of the right to freedom of opinion and
expression that censorship measures should never be delegated to private entities, and that
intermediaries such as social media platforms should not be held liable for refusing to take
action that infringes individuals’ human rights. Any requests to prevent access to content or
to disclose private information for strictly limited purposes, such as the administration of
criminal justice, should be submitted through a court or a competent body independent of
political, commercial or other unwarranted influences.8
7 See A/67/326, para. 48.
8 A/HRC/17/27, para. 75.