Toxic Twitter – The Solution

WARNING:

This report contains descriptions of violence against women. 

Violence and abuse against women on this scale does not have to exist on Twitter. The company’s failure to adequately meet its human rights responsibilities regarding violence and abuse will continue to silence women on the platform unless Twitter undertakes, with urgency, concrete steps to effectively tackle this problem.

Summary of Recommendations:

  • Twitter should publicly share comprehensive and meaningful information about the nature and levels of violence and abuse against women, as well as other groups, on the platform, and how it responds to them.
  • Twitter should improve its reporting mechanisms to ensure consistent application and better response to complaints of violence and abuse.
  • Twitter should provide more clarity about how it interprets and identifies violence and abuse on the platform and how it handles reports of such abuse.
  • Twitter should undertake far more proactive measures to educate users and raise awareness about security and privacy features on the platform that will help women create a safer and less toxic Twitter experience.

Twitter is going to have to say whether they're for the people or they're not. Twitter has the power to change the way women and femmes are experiencing abuse, or even if we experience abuse, on their platform. After all, they are the convenors of the space and they have the power to change our experiences.

Miski Noor, US Black Lives Matter activist

Recommendations

 Amnesty International is asking Twitter to:

1. Publish meaningful data on how it handles violence and abuse. This should include:

  • The number of reports of abusive or harmful conduct Twitter receives per year. This should include how many of these reports are for directing ‘hate against a race, religion, gender, or orientation’, ‘targeted harassment’ and ‘threatening violence or physical harm’. Twitter should also specifically share these figures for verified accounts on the platform.
  • Of the aforementioned reports of abuse, the number of reports that are found to be – and not be – in breach of Twitter’s community guidelines, per year and by category of abuse. Twitter should also specifically share these figures for verified accounts on the platform.
  • The number of reports of abuse it receives per year that failed to receive any response from the company, disaggregated by the category of abuse reported.
  • The average time it takes for moderators to respond to reports of abuse on the platform, disaggregated by the category of abuse reported. Twitter should also specifically share these figures for verified accounts on the platform.
  • The proportion of users who have made complaints against accounts on the platform, and the proportion of users who have had complaints made against them, disaggregated by category of abuse.
  • The above information should be shared in an easily accessible way on its platform.

2. Improve reporting mechanisms, for example by:

  • Ensuring that decisions to restrict content are consistent with international human rights law and standards, are transparent, and allow for effective appeal.
  • Adding an optional question, for users who receive a notification about the outcome of a report, asking whether or not they were satisfied with Twitter’s decision. Twitter should share and publish these figures annually, disaggregated by category of abuse.
  • Giving users the option to provide a limited amount of context (within a set character count) when making reports of violence or abuse, to help moderators understand why a report has been made. Twitter should eventually test user satisfaction with reports that include added context against reports that do not.
  • Providing clear guidance to all users on how to appeal any decisions on reports of abuse and clearly stipulating in its policies how this process will work.
  • Providing users who have filed a report of violence and abuse with links and resources for support, including suggestions on how to cope with any negative or harmful effects.
  • Creating public campaigns to raise awareness amongst users about the harmful human rights impacts of experiencing violence and abuse on the platform, particularly violence and abuse targeting women and/or marginalized groups. This should include sending a notification/message to users who are found to be in violation of Twitter’s rules about the silencing impact and the risk of mental health harms caused by directing violence and abuse at another user.
  • Notifying users found to be in violation of the Twitter rules when their conduct may also violate the domestic law of their country.
  • Creating public campaigns on Twitter encouraging users to utilize reporting mechanisms on behalf of others experiencing violence and abuse. This can help foster and reiterate Twitter’s commitment to ending violence and abuse on the platform, and recognizes the emotional burden the reporting process can place on users experiencing the abuse.
  • Sharing and publishing the number of appeals received for reports of abuse, and the proportion of reports that were overturned in this process, disaggregated by category of abuse.

3. Provide more clarity about how abuse reports are handled, for example by:

  • Sharing specific examples of violence and abuse that Twitter will not tolerate on its platform, to both demonstrate and communicate to users how it is putting its policies into practice.
  • Sharing with users how moderators decide the appropriate penalties when accounts are found to be in violation of the Twitter Rules.
  • Sharing and publishing the number of content moderators Twitter employs, including the number of moderators employed per region and by language.
  • Sharing how moderators are trained to identify gender- and other identity-based violence and abuse against users, as well as how moderators are trained on international human rights standards and Twitter’s responsibility to respect the rights of users on its platform, including the right of women to express themselves on Twitter freely and without fear of violence and abuse.

4. Improve security and privacy features, for example by:

  • Creating public campaigns and raising awareness on Twitter about the different safety features users can enable on the platform. Such campaigns could be promoted to users through various channels, such as promoted posts on Twitter feeds, emails, and in-app notifications encouraging users to learn how to confidently use various safety tools.
  • Offering personalized information and advice based on users’ activity on the platform. For example, sharing useful tips and guidance on privacy and security settings when users make a report of violence and abuse against them. This should be tailored to the specific category of abuse users report; for example, a person reporting targeted harassment could be advised how to protect themselves against fake accounts.
  • Clearly communicating any risks associated with utilizing security features, alongside simple ways to mitigate those risks. For example, if users are taught how to mute notifications from accounts they do not follow, the risk of not knowing about threats made against them from such accounts should be explained alongside practical ways to mitigate it (e.g. having a friend monitor your Twitter account).
  • Providing tools that make it easier for women to avoid violence and abuse, such as a list of abusive keywords associated with gender or other identity-based profanity or slurs that users can choose from when enabling the filter function. An additional feature could allow users to easily share keywords from their mute lists with other accounts on Twitter.

Amnesty International is calling on States to:

  • Enact and implement adequate legislation, including, where appropriate, criminal penalties (in line with international human rights law and standards) in relation to violence and abuse against women online.
  • Prioritize and invest in capacity building and training of law enforcement bodies on relevant legislation, gender equality, the harms of online violence and abuse, and best practices to support those who have experienced online violence and abuse.
  • Invest in public awareness-raising campaigns about violence and abuse online, and in public campaigns to promote gender equality and combat sex- and gender-based stereotypes.
  • Ensure that sex and gender stereotyping online is addressed in comprehensive sexuality/sex and relationships education, and that teachers are trained to deliver such education.
  • Invest in specialist public services for women who have experienced violence and abuse online.