Over the past two decades, the internet has expanded the ways people communicate, opening a new medium for expression, especially for dissidents and minorities. This has affected the larger demographic of women, enabling them not only to express themselves freely but also to gain economic empowerment. The open and outwardly free spaces online give a perception of security. However, it is no longer a secret that women are being attacked for using these spaces, while assuming new identities is becoming ever easier for criminal hackers and abusive users.
While internet access for women still lags far behind that of men, especially in developing countries, more and more women use social media sites. Social networking sites make up a major part of the dialogue and communication happening on the internet. These sites not only help women connect beyond borders but also enable them to do business transnationally, seek out opportunities, take online courses, engage in political and social discourse, and be a true part of this global network.
However, technology has also enabled violence, because information now travels at unprecedented speeds. These highly trafficked social networking sites are coming under increasing scrutiny for failing to respond to women users’ complaints of abuse and/or harassment. While technology companies cannot prevent every act of violence, especially as it is difficult to prosecute abusers and access justice across borders, the abuse could be reduced and tackled. Simply identifying harassing users is not enough, as it does not reduce future harassment. Tech corporations have to protect the rights of the users who actually power these companies, which makes them in turn responsible for users’ security.
“If someone who did not know the entire story saw it,
they would probably think that it is a sweet love story.
However, I was going through a horror story.
As soon as I saw the video, I immediately ran
to take pills to calm my nerves down.” – V. from Goražde
Yes, social networking sites have helped women feel connected with each other, forming strong support systems, but women have powered the profits of these sites as well. Women send more tweets, drive more Facebook advertising and engage more on these sites. Through these very activities, however, women face online harassment and violence. It is usually believed that only vocal women, advocates and feminists face harassment; in reality, a user simply has to display her picture or have a feminine name to attract harassment and sexist, intimidating comments online.
Out of 4,000 cases of cyberstalking reported since 2000, 70% of the victims were women. Women increasingly face sexual and violent threats on all of these sites. “Gamergate”, in which gamers around the world sent rape threats to a female game developer, is just one recent example. While the Gamergate community wants to bar any major female involvement in gaming (except for sexualized female figures in every other gaming title), the routine violence and sexual harassment on social networking sites is not targeted only at gamers or only at vocal women. Indeed, women who are more vocal, share political or social opinions, or are accomplished in their respective fields attract more harassment, but is it safe for those who try to stay silent? Is the lack of support from intermediaries and states actually encouraging the silence of women online?
In response to the abuse and threats that we women face, are big technology names like Facebook, Twitter and others taking tangible steps? Known as internet intermediaries, these organizations are key drivers in the development of the internet as well as in distributing content. According to a definition by the World Intellectual Property Organization (WIPO), intermediaries host, locate and search for content and facilitate its distribution. Intermediaries like Facebook, Twitter, YouTube, Instagram, WordPress and others have gained increasing influence in recent years.
In the last couple of years, these organizations have started responding to calls by victims. However, while the policies may have improved, they are not victim-centered.
The Association for Progressive Communications Women’s Rights Programme started a multi-lingual campaign titled “What Are You Doing About Violence Against Women?” under the flagship Take Back the Tech! campaign in July 2014. It demonstrates how intermediaries can play a substantial role in keeping women safe while keeping online spaces open and free of abuse. Targeting the internet’s big three (Facebook, Twitter and YouTube), the campaign made clear demands of these intermediaries. They asked the corporations to take a clear stand on violence against women in their terms of service (ToS) and engage with diverse civil society to find solutions for safer platforms.
The Take Back the Tech! campaign #whatareyoudoingaboutVAW had global outreach, with the Washington Post, Time, Fortune, O Globo, Yahoo France, the New Indian Express and more covering the campaign. While it is too early to assess whether intermediaries have changed anything as a result, it is important to note that the media coverage of the campaign generated a platform to reach internet intermediaries, so they could hear the complaints and suggestions on how to improve their handling of tech-related VAW.
One of the primary tools that attracted media interest and public engagement was the Take Back the Tech! report cards, which graded the big three on reporting and redressing violence, transparency, and engagement with stakeholders. These report cards were also disseminated at the Internet Governance Forum 2014 in Istanbul where, thanks to this timely input, several corporations committed to learning more about gender in digital spaces.
Yes, you gave us the “Report” button: What next?
New research by Rima Athar for the “End violence: Women’s rights and safety online” project, Building Women’s Access to Justice: Improving corporate policies to end technology-related violence against women, discusses specific problems in detail and offers guidelines on how internet intermediaries can improve their response to VAW online. Athar identifies some clear problems in the narrative of company representatives who shift responsibility for the safety of their users onto law enforcement, court orders, and the victims themselves.
Rima points out that while it is essential to have law enforcement agencies and courts on your side, the debate cannot be limited solely to these terms; that intermediaries’ responsibility cannot be equated only with their legal obligations; that the onus to end VAW cannot be put merely on state services and individuals; and, more importantly, that intermediaries will have to take responsibility too, considering their platforms are being widely used for violence against women.
The research identifies these violations committed through social networking platforms:
- Creation of ‘imposter’ profiles of women; often to discredit, defame and damage their reputations.
- Spreading private and/or sexually explicit photos/videos; often with intent to harm, and accompanied by blackmail.
- Pages, comments and posts targeting women with gender-based hate (misogynistic slurs, death threats, threats of sexual violence, etc.)
- Publishing women’s personal identifying information, including names, addresses, phone numbers and email addresses, without their consent.
Many companies have a mechanism that allows victims to report the abuse, threats, and harassment they face online. However, although the Report it! button may give consolation to victims/survivors, it is uncertain how effective these mechanisms are. Women often report having received no response. It is also difficult to assess the effectiveness of this reporting mechanism, or to suggest improvements, as intermediaries do not detail their processes in public. Do they train their staff in sexuality, law, gender, and human rights? This lack of transparency deepens the trust gap between intermediaries and women users of these spaces.
“The least these companies could do was [sic] interact with me at that point in time and assure me. I understand that anyone can make ghost complaints, but there should be mechanisms for assessing your complaint. I being a victim should be understood by these managements. After assessment of complaint, they should actually be interacting with the complainant to assure her of any concrete action they would be taking. The whole mechanism should be made easy for the complainant, not add piles and piles and layers of work and burden on the complainant who is already under pressure and depression. Also, the least Twitter could have done was to offer me a verified account, obviating the need for hundreds of complaints related to abusive fake (impersonation) accounts in my name, which were using my picture and bio-data.” – Baaghi from Pakistan
During this research, APC’s team explored in depth the policies of Facebook, Twitter, and YouTube and identified how these companies overlook social context when formulating content-regulation and privacy policies with regard to VAW. Facebook and Twitter have often treated nudity as obscenity and gender-based hate as humour, categorizing the latter as free speech, and have also helped normalize graphic violence. This extensive research gives examples of cases where these major corporations failed to recognize the crucial difference made by content’s social context and how it affected, or could affect, victims/survivors.
It is necessary to mention YouTube’s policies here, as it provides its users with the most holistic approach to tackling abuse. YouTube draws clear distinctions about what content is unacceptable: “It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational or disrespectful”, and, on violence specifically: “if your video asks others to commit an act of violence or threatens people with serious acts of violence, it will be removed from the site”. The research by APC calls YouTube’s policy “an example of heightened transparency and responsibility towards users regarding violent and abusive content.”
Aside from demanding that intermediaries consider social context and understand the essential differences in violence and gender-based abuse, here are the primary points Rima draws on to frame the necessary policy changes and inclusions in intermediaries’ response to violence against women:
- Strictly prohibit the publishing of private, confidential, and/or identifying information about others, with clear definitions of what constitutes private versus publicly available information. Intermediaries should place minimal obstacles in the way of taking down pages, posts, or content over privacy concerns, specifically when the content is accompanied by threats. If companies fail to take action, there should be clear accountability measures requiring at least a clear response to the complainant. The report describes an upsetting case of a Pakistani blogger, Baaghi, whose national identity card, marriage certificates, home addresses for the last 10 years and other private information were shared online, culminating in an assassination attempt on the blogger.
- An important point made in this report is to address the English-language bias in the reporting mechanism. While the primary content may be available in many languages, the reporting mechanism does not seem to be. It is also uncertain whether staff are capable of processing multilingual requests. In one case, S. from Sarajevo (Bosnia and Herzegovina) tried to report a fake Facebook profile that had been created to damage her reputation. She needed help from OneWorldSEE to report the profile, as the forms were only available in English.
- Promote Mutual Legal Assistance Treaty (MLAT) reform to increase access to justice in cases of technology-related VAW.
- Provide greater transparency and accountability regarding (in)action on content and privacy requests as in many cases women reported either getting a complete lack of response or only an automated response.
- Provide greater transparency and public accountability on the departments and staff responsible for responding to content and privacy complaints.
- Reserve the right to terminate accounts specifically on the basis of repeated gender-based harassment, hate and abuse.
- Ensure systems-wide removal of individual content (photos, videos, tweets) at their source.
- And finally, “engage with experts in gender, sexuality and human rights to provide input into policy formation, staff training, and the development of education/ prevention programs.”
These case studies, along with policy suggestions for both internet intermediaries and local governments and telephony companies, create a strong framework for beginning to tackle violence against women online. While the community applauds social networking sites’ increasing support for victims, there is still a long way to go before online spaces are truly free for women, without requiring them to keep private accounts or think twice before expressing their opinions.
This, however, cannot be achieved by intermediaries alone. These corporations will have to bring in women’s rights groups, gender researchers, users, and civil society, along with, of course, government and law enforcement, to create a holistic approach to the problem. Involving civil society and users is essential: last year, Twitter rolled out a major update that let a blocked user see the timeline of the person who had blocked them, essentially scrapping the most important aspect of blocking, as some commented.
Intermediaries often attempt to protect victims/survivors without input from users or consideration of social contexts. Women’s trust in these companies keeps eroding due to intermediaries’ failure to recognize VAW online, the arrogance of offering purportedly free spaces, failure to uphold women’s rights, and a history of condoning violence as free speech. This framework can provide some first steps towards safeguarding the majority who use these services.
Image by Ghazaleh Ghazanfari used under Creative Commons license