A Feminist Talk entry published on GenderIT.org (in Portuguese) started an interesting exchange on the complex fields of freedom of expression, censorship, hate speech, legal remedies, and ICT-related violence against women. You must be asking yourself what it was about, to start such a complex debate. Well, it all starts with a map: the Take Back the Tech! mapping platform, available since the end of 2011. On that map, a little red dot in São Paulo, Brazil, indicated that something related to violence against women and information and communication technologies was taking place: a women’s rights activist had used the map to share several tips on what to do if you live in Brazil and are a victim of ICT-related violence, after interviewing the notary of the 4th Precinct of Crimes Committed by Electronic Means in 2011 (1).



The content of the interview started an interesting debate among some APC team members, one that touches on the boundaries and connections between freedom of expression, censorship, hate speech, legal remedies, and ICT-related violence against women.



Asked what to do about harmful content and gender discrimination on the internet, especially when the provider is located in another country, the notary said in the interview: “Brazilian NGOs have much knowledge and power out there, and often obtain the removal of these offensive sites through these contacts” (2). This is real food for thought, if we consider that this same argument and that same power could be used by fundamentalist and/or homophobic groups, many of which also operate as NGOs.



And what could this entail in terms of freedom of expression and technology-related VAW? This issue is constantly discussed in debates on internet rights and communication rights. It calls on us to interrogate the thin line between effectively responding to tech-related VAW and calling for more censorship and privacy violations.



When does speech become “hate speech”?



We have raised this issue many times before: when is a joke no longer a joke? We know Facebook says that rape pages are “joke” pages and that we should not be too worried about them (really?). For instance, one man in Brazil had a website where he detailed “how to kill a feminist”, something that could easily be interpreted as a threat. Should such content be actionable? Should it be taken down? Should providers be pressured to take down that kind of content, or not? If not the providers, who should have the authority to take it down? And if providers succumb to feminist pressure, will they then succumb to right-wing pressure to take down pro-choice pages?



The above relates to how we should deal with hate speech while respecting freedom of speech. When does your right to speak freely harm others? How can we come to a better understanding of that thin line and of proper legal remedies?



This was also one of the major findings of the EROTICS project: the biggest barriers or limitations to internet users exercising their sexual rights are hate speech, trolling and harassment, as in the Brazilian case.



So the problem of taking material down is sometimes actually secondary, since it can have more defined solutions (circumvention, legal remedies, pressuring the government). The rest of the scenario is greyer: how do we draw the line, and who should draw it?



Facebook’s responsibility in a sexist world



What should we do when sexual rights and women’s rights organisations and activists see their personal or organisational Facebook pages turned into arenas for harassment against them or against people affiliated with them? How do we draw the line when they become targets of trolling by “well-meaning” people who just want to lead feminists and sexual rights activists down the “right path”?



Joy Liddicoat, coordinator of APC’s Internet Rights are Human Rights project, asks: if we have the same rights online as we do offline (according to the Human Rights Council resolution), then what does that really mean? The hate speech line is tolerably clear, but the more difficult one is less severe speech that still violates rights in some contexts.



In May 2011, the report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, drawing on article 19, paragraph 3, of the International Covenant on Civil and Political Rights, stated in paragraph 24 that:



There are certain exceptional types of expression which may be legitimately restricted under international human rights law, essentially to safeguard the rights of others. This issue has been examined in the previous annual report of the Special Rapporteur. However, the Special Rapporteur deems it appropriate to reiterate that any limitation to the right to freedom of expression must pass the following three-part, cumulative test: (a) It must be provided by law, which is clear and accessible to everyone (principles of predictability and transparency); and (b) It must pursue one of the purposes set out in article 19, paragraph 3, of the Covenant, namely (i) to protect the rights or reputations of others, or (ii) to protect national security or of public order, or of public health or morals (principle of legitimacy); and (c) It must be proven as necessary and the least restrictive means required to achieve the purported aim (principles of necessity and proportionality). Moreover, any legislation restricting the right to freedom of expression must be applied by a body which is independent of any political, commercial, or other unwarranted influences in a manner that is neither arbitrary nor discriminatory, and with adequate safeguards against abuse, including the possibility of challenge and remedy against its abusive application (3).



We also need to point out that one factor increasing the complexity of this situation is the existence of several actors and several points of potential intervention. Facebook itself, as provider of the medium of communication, is one of those actors. Should it have the responsibility to filter, manage and/or censor content that others produce on its pages? What is its role, and its responsibility? We could say: OK, it is a fact that we live in a sexist world, but should Facebook be held responsible if its online platform reflects that?



According to Jac sm Kee, APC women’s rights advocacy coordinator, this raises a contradiction in our argument: on the one hand, we are telling governments that it is not acceptable to make someone responsible for third-party content (for example, in the case of the Evidence Act in Malaysia, we are saying that you should not be held liable for what someone else posts as a comment on your page). On the other, we are expecting Facebook – which created the platform and technical infrastructure for content sharing – to develop policy to deal with harm that results from interaction in the space it created.



About rights and reputation: defamation is a tricky thing online, and can be used by different people to silence others. But if content targets a specific named or identifiable individual, is it about reputation, or is it more about that person’s right to safety and privacy? Or could defamation be a realistic remedy for violence against women, for instance? We probably need to draw more parallels with offline rights.



To regulate (and how) or not to regulate



What happens if Facebook is the platform where a particular culture of misogyny is promoted? What would Facebook’s role or responsibility be? Since it is not the creator of the content but the enabler, should moderation be completely user-defined (e.g. votes by users)? Trolls might end up twisting the purpose of such a mechanism. Should it instead be about creating mechanisms that let users complain to content creators, or that inform other users in the community that a certain page is sexist and misogynist?



Is simply “taking down” content the least restrictive means of doing this? Is it a proportionate measure? Perhaps a different approach would be to issue a caution, send up to three warnings and, if there is no response, then take the content down. A right of reply on the page is another possible measure, as is requesting an apology… and a controversial one: account suspension.



Should Facebook have an independent and multi-representative body, made up of, for example, different rights-based groups from diverse parts of the world, that can give advice and develop company policy on freedom of expression and privacy issues?



Or should Facebook rely on the advice of something like the Wikipedia community of users, who make decisions on the veracity of content, on what to do when pages get hacked, and so on? Even though there might be issues of gender disparity in the active community of users, this is at least something that can be worked on in terms of capacity building: a new kind of engagement with internet governance in practice.



There are no clear or definitive responses so far, but there are a couple of things we can ask providers to do better, such as reviewing their content removal policies against the points mentioned above: are they clear, transparent, well defined, and based on rights? Human rights would be a safe framework to start from and work with, leaving aside national security and public morals.

Footnotes

(1) You can read the complete blog post in Portuguese here: www.genderit.org/es/node/3652

(2) The language of the original quote was Portuguese: “As ONGs brasileiras têm muito conhecimento e poder lá fora, e conseguem muitas vezes através destes contatos a retirada destes sites ofensivos”.

(3) Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue (May 2011), United Nations, www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf
