Meta’s Human Rights Report ignores the real threat the company poses to human rights worldwide

by Alia Al Ghussain, Campaigner, Amnesty Tech

Last week, Meta released its first annual Human Rights Report, covering all the company’s platforms and products, including Facebook, Instagram and WhatsApp. 

Meta – beleaguered by a wave of scandals following shocking revelations from the leaked “Facebook Papers” – says the report is inspired by the requirement under the UN Guiding Principles on Business and Human Rights that companies “know and show” that they respect human rights.   

However, the Human Rights Report appears to be a cursory and selective analysis of the company’s human rights impacts, conspicuously failing to disclose any information on the most pressing threats the company poses to human rights worldwide.  

What about the business model? 

The Human Rights Report makes no mention of the root cause of Meta’s systemic threat to human rights – the surveillance advertising business model, which drives the company to collect ever more – and ever more personal – data on users in order to sell targeted ads.  

As Amnesty International highlighted in its 2019 Surveillance Giants report, Meta’s business model represents a serious threat to human rights, including the rights to privacy, freedom of expression and non-discrimination. This is a glaring omission in Meta’s assessment – indeed, the business model lies at the core of the company’s adverse human rights impacts across the globe.  

Many of the human rights harms caused by the company stem from its need to maintain user engagement and keep people on its platforms. It may be a tough pill to swallow, but Meta simply cannot claim to be serious about human rights – particularly the right to privacy – whilst continuing to track users across the internet and in their day-to-day lives.  

Absence of algorithms 

Similarly, the report makes no mention of how the content-shaping algorithms on Meta’s social media platforms actively amplify harmful content, including hate and discrimination, to keep people engaged and drive further advertising profit.  

The design of these algorithms means that Meta risks contributing to ethnic violence across the world. UN experts have already found that the Facebook platform played a role in the dehumanisation of Rohingya Muslims in Myanmar in the lead up to, and during, the campaign of violence led by the Myanmar military in 2017.  

In 2021, Meta whistleblower Frances Haugen accused the platform’s algorithms of “literally fanning ethnic violence” in places like Ethiopia.  

Given that the risks these algorithms present are greatly heightened in the Global South, where Meta-owned platforms such as Facebook are synonymous with the internet, it is extremely concerning that these risks have not been addressed in Meta’s Human Rights Report.  

Lack of transparency on India 

Meta published only a summary of the long-awaited independent Human Rights Impact Assessment (HRIA) on India – an unacceptable decision, given that civil society groups, including Amnesty International, have repeatedly called for the publication of the full report. This demonstrates a complete lack of transparency regarding the human rights risks identified and how Meta is mitigating them in the Indian context. Researchers have already pointed out the role that the Facebook platform played in spreading hate in India – what we need to know is how Meta will ensure that this does not happen again.  

It has been well-documented that Meta has struggled to tackle problematic content on the Facebook platform, particularly in the Global South. However, simply ignoring the problem will not make it go away. Meta should show it is serious about learning the lessons of the past and publish the full HRIA on India.  

This is particularly important in India, the country with the highest number of Facebook users. Transparency is crucial to addressing the harms of Meta’s business model globally. Instead, it seems clear that Meta is refusing to publish the HRIA because it does not want the wider world to know the truth about its operations in the country.  

Failure to assess and address human rights risks 

While it is welcome that Meta has made a firm commitment to respect human rights in line with international standards, the Human Rights Report fails to address some of the company’s most pressing human rights impacts. In particular, the lack of transparency on the India HRIA is an appalling attempt to whitewash Meta’s impacts in the country and a missed opportunity for the company to demonstrate a serious commitment to human rights.  

Meta must be willing to grapple with difficult questions about its business model in order to fully address the human rights risks of its platforms. Until it does, its professed efforts to respect human rights will remain more style than substance.