Going online is the same as going out to a rally
Thirty people from six continents met at the APC “Dialogue on digital security and women's human rights defenders” to discuss regional and global trends in digital security, freedom of expression and freedom of association, and their impact on women human rights defenders. Participants came from organisations such as Amnesty International, APC, AWID, the ‘Violence is Not Our Culture’ campaign, Witness.org and the Women Human Rights Defenders International Coalition.
Katerina Fialova and Sonia Randhawa interviewed two of the participants, who chose to remain anonymous, to protect both their work and those they work with.
Katerina Fialova (KF) and Sonia Randhawa (SR): What are key concerns for privacy, and the security of women human rights defenders, based on debates at the dialogue, and your experiences?
R1: Though new technologies offer women human rights defenders new spaces and opportunities to promote and protect human rights, they can also be used to control their work and cause them harm. Women human rights defenders are increasingly targeted or harassed through new means of communication, be it social media websites or their mobile phones. These threats can be difficult to address because they blur the line between what is public and what is private. Defenders don't automatically have one Facebook profile for their friends and families and another for their work. The personal information they share can therefore be used against them in retaliation for their work.
If you track someone on a social media website, it is easy to see who their friends and family are, and to see what they do, when and where. It is also easy to infiltrate those networks by pretending to be someone you are not. How do you defend yourself against those abuses or infiltrations? It isn't straightforward for people who have not been trained in digital security. Often you only realise the consequences of what you do online when it is too late, when the information is already out there.
By making connections more visible, social networking makes it easier to target defenders by harming or monitoring their friends and family. The daughter of a human rights defender recently received threats on her Facebook account. She was told that everything happening to her was because of her mother's work and that her mother was not taking care of her family properly. Even if a defender is careful about what they say on the internet, others around them may not be. They may upload pictures showing where the defender lives, who they know or when they are away.
Another concern is around access to resources. Defenders see the added value that new technologies can provide, but some don't necessarily have their own equipment. Instead, they use existing resources, such as internet cafes, to publicise information about their work, or to communicate with victims.
Internet cafes are really unsafe in some parts of the world. You have to be careful about the services you seek online, but you also have to take into consideration where you're using these services. Defenders may have secure email accounts, but if they are using an internet cafe, all the work done to encrypt data can be undone by malware that records their activities and keystrokes. And if there is a camera in the internet cafe, as there often is, it is easy to put a face to these activities.
R2: Well obviously in different contexts, the trends are quite different. Our experience is with people who are sending us human rights information and we're having a conversation about it.
Among these people, a key problem is misunderstanding how to use some of the available tools, such as encryption. Even if we're trying to send encrypted messages, if there's anybody in the chain who doesn't know how to use encryption or doesn't have a system to encrypt, they send it back non-encrypted and that information goes out to everybody, putting everyone at risk. Even if we're trying to be secure, responses are often not secure.
Another problem is that defenders are getting into trouble when they use digital tools unsafely, but there is a real lack of documentation of these cases. This means that when they land in trouble, we don't know if the harassment or troubles they face are linked to their insecure communications. In order to make that link, a lot of training and literacy is needed.
A key example is around facial recognition. There might be activists who are active online, and when they go to a protest, facial recognition software recognises them, exposing them to great risk.
We need to pay attention to this issue and decide what we're going to do about it. Do you need to entirely cover yourself every time you go out to a protest, or are there other ways you can protect yourself? It's important to recognise that this is a disclosure issue. Many people (including me!) don't really understand facial recognition. There needs to be education on that, because the technology is already being used.
One of the reasons people don't pay attention to digital security is that they don't see the internet as a public space. Going online is the same as going out to a rally, so you should take the same security precautions. People feel secure because they are at their house, or on their own computer, and don't recognise that it is a public space, so there also needs to be education on that.
KF/ SR: What is your organisation doing to respond to these digital security needs of WHRDs?
R1: We take gradual steps to support human rights defenders in increasing their capacity to respond to issues of digital security and to reduce their vulnerability to these threats. It starts by creating awareness with partners, raising the issue of digital security whenever we can, providing guidance, etc. For instance, when we go on missions, we provide informal guidance to people that want to undertake human rights work. We also organise formal training sessions and increasingly integrate digital security in workshops around human rights documentation.
We also fund local organisations to run digital security workshops that are tailored to the human rights situation in their region, and we ensure that these training sessions have a gender perspective. When a human rights organisation seeks assistance to organise a security training for its male members only, we discuss the rationale of the training with the applicant. Why do they think it is necessary to have only men in the training? Are WHRDs not facing threats in their community as well? The response is often that the threats WHRDs face are not considered acts of collective violence, but are instead seen as inherent to their private life (for example, domestic violence or honour crimes). By having these discussions, we encourage applicants to build gender perspectives into their projects.
R2: There is a big question around the duty of care for all human rights organisations when they're interacting with partners. There needs to be more clarity on defining each person's duty of care to inform and educate others. You can have lots of trainings, but it's about changing the culture. And about recognising those sorts of dangers and threats and not taking them lightly. So if you're committing your information to me, or I'm committing my information to you, we need to talk about security. We need to see this as basic disclosure.
We've really been encouraging people in our organisation to take that approach. In trainings we have that are not specifically on digital security but on informed consent or ethics, we raise some of these issues, particularly the need to recognise that human rights information is personal information.
Also, it's important to recognise that the risks are different for different groups. For example, the closed country question is really interesting, where there are restrictions on data flow in and out. People who are skirting those to put information out are at a high level of risk.
And of course if the blogger is a woman, there's an added layer of risk, because it often means that she's stepping out of a traditional role. This risk is not just out there on the internet, but also in communities and families, where there can be risks to being an activist.
So, just as we recognise that there are added risks to women at rallies or protests, we need to recognise that women who are activists online also assume added risks. They are still out there; they still have an identity.
KF/SR: You're speaking about gaps for women human rights defenders when they find themselves in a situation of digital insecurity. What gaps do you see?
R1: The digital divide – having unequal access to IT equipment, services and support – is a major obstacle for WHRDs to address situations of digital insecurity. It is difficult to develop a digital security strategy when you don't own your IT equipment, have limited access to the internet and when IT training is often targeting a male audience.
The question for us is how to support WHRDs in overcoming this digital divide. We're definitely getting there: there are cheaper tools, different initiatives to communicate information in different ways, animation, cartoons to make information more accessible, etc. But there's a lot more to be done.
Even if you work to ensure WHRDs have equal access to equipment, services and support, the blur between private and public remains and continues to expose them to risk. With social media sites you arguably need to train the people around the defenders, and defenders need a way to share information about digital security with those around them. To take the example of the daughter who was targeted for her mother's work: should she also attend digital security workshops, though she may not want to engage in human rights work? That is where the risks are occurring. You can train defenders, but if people around them are not cautious, their safety is undermined.
KF/SR: As an organisation, what are your security policies and practices for keeping yourselves secure when working online?
R1: We have our own servers on which information is encrypted. Our email provider makes encryption really easy for people, and we are regularly prompted to change our passwords. However, there is an argument that what is needed is not new tools or equipment, but behavioural change in how we work and how we communicate with others.
Another challenge is how to deal with individuals who spontaneously come into contact with a human rights organisation, local or international, to share sensitive information in insecure ways. When you have existing relationships with HRDs, you can provide them with guidance and advice. However, when you are contacted first, it is more difficult. We can secure information once we have it, but getting it to us puts people at risk. We are exploring developing web pages to inform the general public about the risks of sharing sensitive human rights information and the precautions they could take.
Despite all our policies and resources, colleagues in the organisation continue to receive threats. We receive hate mail through the organisation's email, and through Facebook or Twitter accounts. A colleague recently had altered pictures of her posted on a social media website. She reported it to the site. But even if the photos are taken down or the account of the person who posted them is closed, the harm is done. It will not stop the pictures from circulating, and the individual could simply open a new account from a different computer and post the pictures again. Twitter said they would look at what they could do, and what they could do was close the account.
KF/ SR: Can you give support to those WHRDs who are targeted, within the organisation?
R1: There is the digital security training that they undertake, so they have settings on Twitter that make their accounts more secure and not accessible to the general public. There's also a confidential helpline which provides an emotional support network, and allows them to talk about the issues and unpack what happened. We're very protective of the people who work with the organisation, including their names.
KF/ SR: How can we, as organisations engaging around security and safety of activists, respond collaboratively to digital security violations, particularly for women human rights defenders?
R1: Together we can share experiences and analysis and highlight what resources and tools are available. This is very useful for discussing emerging trends, seeing where there are gaps or unnecessary overlaps, and developing adequate responses. In terms of what we can do now, we need to build the capacity of women human rights defenders to deal with digital security. So, if we're concerned that women human rights defenders are using equipment they don't control, what training can we give to ensure that the computers they are using are secure, even if they are using a colleague's computer or an internet cafe?
R2: It is essential to share resources and discuss them. There were so many conversations about what different risks there are, and to hear what other organisations are going through. Often those in the human rights community don't talk to each other about things that they're worried about. But the only way to get to where we need to be is to have an open and honest dialogue.
The other key need is documentation. We often can't link someone's online activity to the risks they're facing, and therefore can't link those risks to the lack of laws or systems for seeking redress, and this is problematic. Until we have documentation, we'll just be saying, well, it seems that the trend is ‘x’. If we want to take anything forward, we need proof through documentation.
We need to think about how we can train women human rights defenders to recognise the risks they are facing online and to understand how those risks affect the risks they face in their physical lives. We need to work together and gather evidence.
This isn't about learning sophisticated computer knowledge. I'm fairly technically savvy, and I don't know if there's a virus on my computer unless another program tells me. It's about understanding the risks that you face. For example, if you were arrested and interrogated, and your interrogators have information you don't know how they obtained, it may be that your online work is not secure. Then we need proof; we need to be documenting it.
Until we know what's happening, it is difficult to advocate for change, or to get defenders to understand the risk. We're not trying to freak anyone out, but if we had a couple of solid case studies, where nobody could question that those two activities are related, that would go a long way. It's a huge mountain, but we need to start climbing it.
KF/SR: What are the strategic processes and spaces we should be intervening in for policy advocacy, and which of them do you actually use in your advocacy work?
R2: There are so many angles. States need to take women's security and their debates on violence against women (VAW) seriously and to recognise the importance of online behaviours. If someone is being cyberstalked, that means very real threats to that person. If governments aren't taking steps to address that by training the police, by ensuring they have systems and laws in place, then there is space for advocacy there.
And there's a lot of advocacy that needs to be done around businesses. There are businesses that are facilitating the ability of governments to undertake surveillance of women's human rights defenders. There is a framework around corporate social responsibility and accountability. And there could be more investigation of applying the UN Guiding Principles on Business and Human Rights to data security and freedom of expression on the internet. They are actors that have responsibility. But it comes back to the documentation. If we don't have the cases of people not being able to access justice, it will be very difficult to build advocacy around it.
Then there are various UN forums, of course, which are important. But I also think there could be some interesting work in test litigation: actually taking various players to court to test the laws that are in place, to see whether they are providing the kind of protection that is needed. You can create so much change by taking something through the legal system; you can change the interpretation of a law, and the interpretation of the law is something that is being abused by so many, not just in the area of digital security.
KF/SR: What is your view on anonymity in the context of VAW? On one hand, anonymity is an essential part of women's privacy, personal safety and freedom of expression and association online. On the other hand, stalkers, trolls and harassers of women often hide behind the anonymity of the internet. How can we apply VAW frameworks without them being used as a pretext for more censorship or for reducing anonymity and privacy online?
R2: Not all VAW on the internet comes from anonymous users. We know that there is bullying on the internet, that women are stalked by people they know, and the fact remains that there are police forces and governments that don't take that threat seriously. Women should be able to take out a restraining order against people for their behaviour online. So I agree with you, but I also think that there are a number of cases where having a VAW framework and the responsibility to protect would be quite helpful.
We engaged in a discussion on the boundaries of security, because the authorities automatically seem to go to a 'we need to protect you' mode which involves violating rights. And they do that offline as well. It's a tricky one, but we need to set the dialogue on what is appropriate protection, rather than just stripping away anonymity.
KF/SR: Thank you for the interview.
Photo by Julien Harneis. Used with permission under Creative Commons licence 2.0.
This article was written as part of APC's “Connect your rights: Internet rights are human rights” campaign, financed by the Swedish International Development Cooperation Agency (Sida).