WARNING:

This report contains descriptions of violence against women. 

Companies, wherever they operate in the world, have a responsibility to respect all human rights.

This is an internationally endorsed standard of expected conduct. The corporate responsibility to respect requires Twitter to take concrete steps to avoid causing or contributing to human rights abuses and to address human rights impacts with which it is involved, including by providing effective remedy for any actual impacts. It also requires the company to seek to prevent or mitigate adverse human rights impacts directly linked to its operations or services by its business relationships, even if it has not contributed to those impacts. In practice, this means Twitter should be assessing – on an ongoing and proactive basis – how its policies and practices affect users’ rights to non-discrimination and freedom of expression and opinion, as well as other rights, and take steps to mitigate or prevent any possible negative impacts.

Amnesty International acknowledges that Twitter has recently taken steps towards addressing the problem of violence and abuse against women on the platform. In a response to the organization, Twitter reiterated that abuse and hateful conduct directed at women are prohibited on the platform, and characterized the issue as one that it is “energized and motivated” to address. The company also highlighted several positive changes to its policies and practices in response to violence and abuse on the platform over the past 16 months, including a tenfold increase in the number of abusive accounts against which action has been taken.

However, Amnesty International believes that Twitter is failing to adequately meet its responsibility to respect human rights in the context of violence and abuse against women on the platform as the steps it has taken are not sufficient to tackle the scale and nature of the problem. Women have the right to live free from discrimination and violence. They also have the right to freely express themselves, both online and offline. Twitter’s policies and – in particular – its practices clearly fail to respect these rights. As one of the world’s leading social media platforms with over 330 million monthly users, this failure has a serious impact as it contributes to the silencing of women’s voices online.

In the words of US Black Lives Matter activist Miski Noor:

“Twitter needs to hone in on their responsibilities and their values. I’m tired of tech companies or social media companies thinking they are exempt from living their values. If Twitter values women and femmes, if they value our safety, then they need to have practices that they actually develop and implement in real ways that will protect us.”

In a response to the organization, Twitter said it disagreed with Amnesty International’s findings and that it “cannot delete hatred and prejudice from society”.

Lack of Transparency

Transparency is a key component of Twitter’s human rights responsibilities. In this regard, Twitter’s reporting mechanisms must be accessible and transparent. It is impossible to assess the effectiveness of these mechanisms for social media platforms more generally when companies like Twitter give little information about their internal review processes, including how complaints are dealt with, the ratio of company moderators to the volume of reports, the type and level of gender- and other identity-based human rights training that staff receive, and the time limits and targets for reviewing reports.[1]

According to the UN Guiding Principles on Business and Human Rights, the responsibility to respect human rights involves having both policies and processes through which businesses can both “know and show that they respect human rights in practice”. ‘Showing’ includes “providing a measure of transparency and accountability to individuals or groups who may be impacted (such as users) and to other relevant stakeholders, including investors.” The Guiding Principles also state that companies should communicate how they respect human rights in practice in a number of ways, such as formal reporting that provides indicators on how they identify and address adverse impacts on human rights.

In July 2017, Twitter stated it was taking action on 10 times the number of abusive accounts every day compared to the same period the previous year. It also said it had removed twice the number of accounts of repeat offenders who created new accounts after being suspended. In terms of user behaviour, Twitter also stated that accounts put into a period of limited functionality following a violation of the Twitter Rules generate 25% fewer abuse reports, and that approximately 65% of these accounts only enter this state once. While these statistics indicate an improved response to violence and abuse on the platform, they do not provide sufficient information to understand how significant this progress is relative to the overall scale of the problem.

In January 2018, Amnesty International wrote to Twitter asking for disaggregated figures on the number of reports of abuse on the platform and the number of reports found to be in violation of the Twitter Rules, among other requests for statistics. In its response, Twitter outlined a number of steps it is taking to combat violence and abuse on the platform but ultimately refused our request. Twitter stated that this information can be both uninformative and potentially misleading, because users regularly report content with which they disagree or, in some cases, with the direct intent of trying to silence another user’s voice for political reasons. Twitter also stated that there is a misperception that the volume of reports affects its enforcement decisions. Whilst Amnesty International agrees that such figures must not be taken as providing a complete picture of violence and abuse on the platform, detailed statistics on reports of abuse can help set a baseline, and potentially targets, for response times to reports of abuse.

Enabling and Empowering Users

Enabling and empowering users to create a safer and less toxic Twitter experience is a key component of Twitter’s responsibility to respect human rights on the platform. Part of this responsibility means that Twitter must enable and empower users to understand and utilize individual security and privacy measures such as blocking, muting and content filtering so women are easily able to curate a less toxic and harmful online experience.

Twitter has introduced a number of security and privacy features to help users protect themselves from violence and abuse. Users can block accounts, mute notifications or conversations, and filter out tweets containing specific language they prefer not to see. It is important that Twitter not only develops such features but also ensures that they are accessible and easy for users to utilize. Twitter should pay particular attention to educating and empowering users who may be targeted with abuse on the basis of their gender or other aspects of their identity. However, it is important to stress that Twitter must pay sufficient attention both to equipping users to confidently use security and privacy features and to consistently enforcing the Twitter Rules, to avoid placing an undue burden on users to keep themselves safe on the platform.

Ensuring Free Expression for Everyone

Ensuring that women can express themselves online on the basis of equality may require restricting some forms of expression. It is crucial that states and companies ensure that these efforts do not result in unlawful censorship.

The right to freedom of expression may legitimately be subject to restrictions, provided that such restrictions comply with the requirements of international human rights law, including that they serve a legitimate purpose, are provided by law, and are necessary and proportionate. Many forms of violence and abuse against women, such as direct threats of physical or sexual violence, are illegal in many domestic legal systems, and such prohibitions are generally consistent with the right to freedom of expression. Where such acts are considered crimes under national law, companies and respective governments must work together to address them. Twitter should also ensure that, in so doing, it does not reveal sensitive user data except in response to valid court orders that comply with international human rights law.

Companies have a responsibility to respect free expression, which encompasses expression which may be offensive or disturbing. The International Covenant on Civil and Political Rights, for example, requires states to prohibit – though not necessarily through the criminal law – only “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.” Many other forms of expression, even those which shock or offend, may not lawfully be restricted.

This means that not all forms of online abuse against women may be legitimately subject to criminal sanctions or take-down measures. Laws or policies aimed at so-called “hate speech” must be carefully crafted to ensure they do not lead to unlawful censorship, including of the very groups they may seek to protect. This means, among other things, that laws or policies that restrict the right to free expression must not be overly vague, but rather, “must be formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly.” Additionally, people whose content is taken down or otherwise restricted must be given clear reasons why and a meaningful opportunity to appeal against these restrictions.

Even forms of violence and abuse against women online that may not be lawfully prohibited can negatively affect women’s human rights. As a result, social media companies like Twitter have varying levels of responsibility depending on the degree of violence or abuse against women occurring on their platforms.

However, the imposition by states of legal liability on companies that fail to remove abusive content sets a dangerous precedent and risks causing more harm instead of addressing the core of the issue. Such penalties risk unintended consequences, such as the overbroad application of existing company policies to avoid liability, which can have negative repercussions for the right to freedom of expression of all individuals, including the censorship of legitimate expression. Attempting to solve one freedom of expression problem by creating another is simply not the answer.

The Duties of States Under International Law

While private companies have responsibilities under human rights law, states are considered the primary duty bearers. Specifically, states are obliged to respect, protect and fulfil all human rights for everyone. This means that states must not only refrain from interfering with the exercise of human rights, but must also protect the exercise of rights from interference by private parties, and take proactive measures to ensure the enjoyment of human rights.

These obligations include the duty not only to tackle violence and abuse online, but to address the underlying causes of such abuse, including by ensuring the right to non-discrimination in the enjoyment of all human rights. This task should be addressed with a broad set of policy initiatives aimed at promoting minority and under-represented voices, fostering tolerance and understanding and condemning discrimination and intolerance wherever it arises. Overall, states must ensure that there are adequate laws in place to prevent and end online violence and abuse against women and must also combat negative and harmful gender stereotypes against women that contribute to the manifestation of violence and abuse against women online.

Amnesty International’s online poll shows that many women believe their governments have a lot more work to do on this issue: in the eight countries polled, half of all women polled stated the current laws to deal with online abuse or harassment were inadequate. In the UK and USA, around 1 in 3 women stated the police response to abuse and harassment online was inadequate.

[1] Transparency is also an important component of the right to remedy for those users whose content is taken down or otherwise censored. It allows for more meaningful comparisons to be made as to how social media platforms respond to different types of reports, such as those by governments or those related to different topics, and thus helps avoid a resort to overly restrictive policies around certain types of content.

Case Studies