Why is Twitter still not acting when it receives reports of death threats?
By Azmina Dhrodia, Researcher on Technology and Gender
Rochelle Ritchie’s account of reporting abuse to Twitter will be familiar to far too many women. When the US political analyst reported death threats sent from Cesar Sayoc’s account @hardrock2016, Twitter moderators responded saying the tweets did not violate their rules on hateful conduct.
Twitter offered no explanation as to how it reached this conclusion, or how it had decided these tweets didn’t breach its explicit ban on “wishing, hoping or calling for serious harm on a person”. The tweets included a warning to Ritchie to “hug [her] loved ones real close every time you leave home”, and photo montages made up of images of Ritchie, dismembered body parts, alligators and a news report about a teenage girl’s body which was found in a swamp.
It is this complete lack of transparency around how it enforces its rules that has led so many women to lose faith in Twitter, as documented extensively by Amnesty International. Twitter’s moderators consistently send the message that even when users take the time to report abuse, the company won’t always take it seriously - even when it’s a threat of physical violence.
So in many ways, Rochelle Ritchie’s experience was typical. Twitter may not have given @hardrock2016 another thought, had it not emerged that Sayoc was the man suspected of sending pipe bombs to several prominent Democrats in the United States, including Hillary Clinton and Barack Obama.
This is the terrifying reality that women have been trying to make Twitter understand for so long – violence and abuse online can spill over into violence and abuse offline. For far too long Twitter has refused to take meaningful action to make its platform a safer place, turning a blind eye to the abuse that women users in particular are subjected to.
Twitter now says it “made a mistake” and is “deeply sorry for that error” - although it did not apologize directly to Rochelle Ritchie. How did it make this mistake? Well, we don’t know, because Twitter does not make public any information about how many moderators it employs or how it trains them to interpret its own rules on hateful conduct.
Amnesty International has documented again and again how Twitter’s reporting process has failed women. In 2017, 43% of women Twitter users we polled in the UK, and 22% in the USA, said the company’s response to abuse or harassment was inadequate. We compiled scores of cases where explicitly racist, sexist or threatening content was reported to Twitter and judged to be in compliance with its rules, leaving women scared for their safety and that of their families. In the same poll, 44% of the women surveyed said they feared for their physical safety when they received abuse or harassment on social media.
This should come as no surprise. In our hyper-connected era we cannot always separate the online and offline spheres. Hatred and violence thrive in online spaces, and it is increasingly common to find that people who carry out violent attacks have also used the online world to make threats of physical and sexual violence.
To be clear: social media companies are not law enforcement agencies. But the ever-growing links between online and offline behaviour mean that when people report threats and abuse to social media platforms, those platforms have a responsibility to take the threats seriously - and when such acts are crimes under national law, companies and the relevant governments must work together to address them.
Sadly, Rochelle Ritchie’s experience shows that despite increased attention on Twitter’s toxic reputation, very little has actually changed when it comes to users’ experience of reporting abuse.
If Twitter is serious about tackling this issue once and for all, it needs to invest adequate resources, increase transparency and listen to the people who have evidence showing how it is failing its users time and time again. Women have the right to use Twitter equally, freely and without fear.