Last month a coalition of women’s organisations led a campaign[1] to hold Facebook accountable for its content policy, in particular how it deals with hateful[2] speech and representations of gender-based violence shared by its users. In response, freedom of expression advocates have expressed concern and criticism[3] over the precedent[4] set by demands for Facebook to remove hateful content from its site. This has spurred debate over gender-based[5] hate speech[6], the interdependence of human rights[7], and the impact of sexist online culture[8].
Debate over how to balance[9] freedom of expression[10] with the right to protection from incitement to discrimination[11] is constantly being reframed in the context of new technologies and political realities. As tech leaders struggle to enforce community guidelines for free speech[12], rape threats[13] shout down the voices of women in online spaces, while[14] a bigoted YouTube video[15] created in the US has led to ongoing censorship in Pakistan, with court proceedings underway[16].
Despite this ongoing debate, there is clear space for agreement on the need for transparency and accountability in how Facebook and other internet intermediaries deal with abusive content and takedown requests. This point has been made by advocates from a variety of backgrounds, including the UN Special Rapporteur on freedom of opinion and expression, who in his 2011 annual report[17] recommended that internet intermediaries
“[d]isclose details regarding content removal requests and [..] establish clear and unambiguous terms of service in line with international human rights norms and principles and to continuously review the impact of their services and technologies on the right to freedom of expression of their users, as well as on the potential pitfalls involved when they are misused”[18].
These recommendations align with the UN Guiding Principles on Human Rights and Business[19], which include a specific focus on the need for greater access by victims to effective remedy, both judicial and non-judicial. ICT Sector Guidance based on the UN Guiding Principles is currently being developed by the Institute for Human Rights and Business (IHRB) in consultation with various stakeholders[20], and recommends:
• implementing emergency flagging procedures where significant adverse human rights impacts are at issue;
• formalising an NGO problem-solving, advisory, or oversight role as part of the mechanism’s processes; and
• establishing more predictability and transparency around how such complaints get resolved, including through indicative time frames, ‘appeal’ processes or requests for review, and engagement with users.
As the #fbrape campaign has highlighted, there are significant obstacles to the takedown of content on Facebook that clearly violates human rights. Photos that have been stolen or misused[21], violating privacy rights, can only be removed if a request is made by the person portrayed (over 13)[22] and if that person lives in a country with privacy laws that require removal of unauthorised photos. Even where all requirements are met, there are no indicative time frames for response, with informal reports suggesting long wait times[23]. Facebook could reduce the adverse impact of these violations by implementing emergency flagging, along with transparent procedures in line with human rights standards. Rather than relying on local privacy laws, Facebook should consider changing its terms of service to uphold the standards of privacy and freedom of expression set out in international human rights instruments.
In considering how to respond to hateful speech, there are a number of opportunities for intervention beyond the takedown of content. Even in those cases where a judge rules that content qualifies as hate speech, that process can be lengthy, and may not address the underlying impact. Creating a moderated space within Facebook for users to discuss and comment on abusive content would provide a much needed opportunity for victims of abuse to speak out and be heard within the wider community of the platform. This type of engagement with users is an important component of accountability and transparency for internet intermediaries. At an August 2012 discussion by the UN Committee on the Elimination of Racial Discrimination[24], a representative from Minority Rights Group International stated that:
“[..] vulnerable groups were often less outraged by single examples of hate speech than by being denied the chance to speak out and be heard in their efforts to counter statements they considered to constitute hate speech.”
Last week, leaked classified documents revealed widespread surveillance and data collection by the US National Security Agency (NSA), performed without judicial oversight. In response, civil society groups from all over the world have rallied to develop coalition statements[25] calling[26] for greater transparency[27] by both States[28] and private companies.
This same clarity and shared vision is needed to move forward amidst growing tension over how to respond to hateful online speech. The transparency that is essential to protect our fundamental right to privacy is also crucial to protect women’s rights and prevent censorship.
[2] I use the term hateful speech to include speech that does not meet the legal definition of hate speech, but is nonetheless extremely harmful.