How gender-based harassment falls through the digital cracks
The internet and new technologies provide countless opportunities for women’s empowerment, engagement and education. At the same time, digital tools are increasingly facilitating gender-based threats, harassment, assault and violence against women online. Internet intermediaries have an important role to play in detecting and redressing online violence against women. Yet the response of intermediaries to incidents of gender-based harassment online is too often defined by gendered assumptions and misconceptions about the nature of violence against women online and the tension between addressing online violence and protecting free expression. This series is produced as a part of APC’s End violence: Women’s rights and safety online project and will explore the responsibility of intermediaries to ensure that the internet is a space that empowers, rather than subjugates, women.
When did the internet transform from being a democratised space and tool of empowerment that has enfranchised hundreds of millions of women and girls, to an arena of gender-based hatred? Was it when Anita Sarkeesian was virtually mauled by an online women-hating mob who subjected her to “a staggering tidal wave of hate and harassment,” including threats of death and rape, for creating a Kickstarter project that aimed to unpick female stereotypes in video games? When Facebook became a place where women who spoke out against misogynistic Groups were met with photoshopped pictures of their own faces, beaten and bloody, and women democracy activists had their campaign pages suspended? Or when Twitter became a delivery mechanism for bomb threats and rape taunts?
The staggering frequency with which women experience technology-related violence and harassment suggests that the internet – once an alternative to the male-dominated mainstream media – is becoming an increasingly closed space for women to connect, share ideas and express themselves. Internet services and social media sites such as Twitter, Facebook, Rediff, Tumblr and Google+ are providing a platform for the expression of ideas that demean and threaten women. The whole gamut of violent intimidation – threats of rape, death, mutilation and domestic violence; the use of misogynistic words like “slut”, “cunt”, “bitch” and “whore” to bully and harass; the publication of images that denigrate and violate the female body; to name but a few – is being run by “trolls” and other internet users who see such platforms as yet another means of female subjugation.
The entities that host such sites – and thus facilitate such violence and harassment – have failed to take a principled and comprehensive stance against the perpetuation of gender-based hatred and violence online. In the specific instances in which online threats to women have become public, the initial position of the intermediary has inevitably been an appeal to the importance of facilitating all views and opinions, even those which offend. A secondary reaction has been for the intermediary to establish or improve avenues for users to report incidents of violence against women for moderation by the platform provider. Both of these responses have been inadequate to address the deeply entrenched and widespread culture of misogyny that exists online, or to provide redress for specific instances of gender-based harassment.
Why have the responses of internet intermediaries to technology-related forms of violence against women been so inadequate? A number of key gendered norms and assumptions have informed, and thus restrained, the approach of internet intermediaries to gender-based violence online:
- Speech that trivializes or glorifies violence against women does not amount to hate speech
In response to the #FBrape campaign by Women, Action and The Media against the targeting of women with images and content that threaten or incite gender-based hate, Facebook argued that, while the platform prohibits “hate speech”, there are instances of offensive content, including distasteful humour, that are not “hate speech” and thus do not justify immediate removal. Facebook’s statement went on to equate gender-based hate speech with “insensitive or cruel content”, revealing a fundamental misapprehension of the destructive and threatening nature of gender-based hate speech. Such speech must be seen in the context of historical and institutionalised violence and discrimination against women, and the monumental power differential that persists between men and women. Equating gender-based hatred with insulting remarks only further undermines the position of women.
- Harassment online does not amount to violence unless there is the probability of “imminent harm” or “real violence”
This misconception clearly factors into the risk analysis of internet intermediaries when judging their approach to gender-based hatred and harassment. It reveals a lack of understanding of the inappropriateness of concepts such as “imminent” or “genuine” for women who experience rape and domestic violence as pervasive threats. It also fails to consider the very real effects of violent and sustained harassment, including anxiety and changes in behaviour. Such an attitude is clear in the lack of response by Twitter to the various rape and death threats received by prominent feminist activists throughout 2013. Were such threats made by other means they would be viewed as indicative of very real and imminent harm and would be immediately reported to police, yet when made on Twitter they were diminished.
- Common misogynistic slurs do not present a real threat of violence
There is a clear perception on the part of internet intermediaries that the use of common misogynistic slurs such as “bitch”, “slut” and “whore” has reached such frequency in the mainstream media that their employment as a means of harassment or discrimination online is acceptable in all circumstances. Facebook’s response to the online hatred directed at Icelandic woman Thorlaug Agustsdottir revealed the site’s problematic attitude to hate speech: “It is very important to point out that what one person finds offensive another can find entertaining – just as telling a rude joke won’t get you thrown out of your local pub, it won’t get you thrown off Facebook.”
- Online reporting mechanisms are sufficient to ensure that gender-based hate speech is brought to light
In the aftermath of the recent rape and bomb threats distributed via Twitter, the platform announced the introduction of an in-Tweet report button, with which users can report abusive behaviour directly from a tweet. However, no proactive moderation is instituted, meaning that if offensive or violent tweets go unreported – because the recipient fears reprisals, for example – then no action will be taken. The burden is thus on the recipient of harassment to take action, which risks delegating the responsibility of mitigating and removing online violence against women to women themselves.
A further problem with reporting mechanisms was illustrated by the case of Dana Bakdounes, a Syrian woman and member of The Uprising of Women in the Arab World Facebook group, which had its administrators suspended after photographs of an unveiled Bakdounes were posted on its page. A tide of reports of offensive content in the group – undoubtedly received from those who protested Bakdounes’s right to be pictured without a hijab – was interpreted in a way that further silenced women.
Photo by Scott Beale / Laughing Squid. Used under Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic licence.