Overview

Human rights are more critical than ever in the digital age. They enable people to express themselves online, share opinions, and create, use, and access technology in a safe and just manner.

Technology can be used as an avenue to exercise freedom of expression, hold governments to account and expose wrongdoing.

However, technology can also pose a systemic threat to human rights by reproducing structural discrimination against marginalized groups in policing, migration, social welfare, and access to health care.

Surveillance tools also adversely affect already marginalized groups in many ways across the world.  

Our mission is to respond to emerging threats facing human rights in the digital era and help shape and protect rights for the future.

We hold Big Tech to account, while confronting the digital and automated state, and fighting against the increasing government surveillance and censorship enabled by these technologies.

The digital and automated state refers to governments' increasing use of technology to make vital decisions in welfare, migration, and asylum management systems.

We envision a future where technology enables the protection of human rights and an equitable society.

In the digital age, human rights enable people to participate in digital environments in a safe, accountable, and inclusive manner, guaranteeing freedom of expression, privacy, equality, non-discrimination and data protection.

We increasingly conduct our lives online and our rights must be protected.

How technology impacts society

As technological developments continue at an exponential pace, and with technology companies expanding in power and global reach, it is more important than ever to critically examine who is best served, and who is harmed, through these developments.  

Fabian Rogers is an activist in New York City who, together with the other tenants in his building, successfully resisted attempts to install facial recognition cameras in their housing complex.

Big Tech Accountability 

Big technology companies wield tremendous power over our lives. Our increasing reliance on Big Tech has allowed a handful of companies to undermine our privacy and track our every move, reshaping our interactions and creating predictions about our behaviour for their economic gain.

Their surveillance-based business model poses a systemic threat to human rights, including children’s rights. We have little control over what information corporations and governments gather about us, how they use that information, and with whom they share it.

With the growing scourge of misinformation and the proliferation of hate and racism online, big technology companies must do more to respect human rights. This is why we campaign for regulatory measures that force Big Tech firms to abide by their human rights responsibilities.

An algorithm is a process, set of rules, or set of instructions that enables a computer programme to combine different sources of information and generate results. Algorithms also power social media platforms, determining what content users get to see.

They are everywhere, in many different spheres of our lives: for example, in managing welfare systems, policing, and criminal justice.
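To make the idea concrete, here is a minimal, hypothetical sketch in Python of an engagement-based feed-ranking algorithm of the kind social media platforms use. It is not any platform's actual code; the posts, metrics, and weights are invented for illustration.

```python
# Toy engagement-based feed ranking. Purely illustrative:
# the posts, metrics, and weights below are invented.

posts = [
    {"text": "Local news update",    "likes": 120, "shares": 10,  "comments": 15},
    {"text": "Inflammatory rumour",  "likes": 300, "shares": 450, "comments": 600},
    {"text": "Community fundraiser", "likes": 80,  "shares": 25,  "comments": 12},
]

def engagement_score(post):
    # Weight most heavily the reactions that keep users on the
    # platform longest (a common pattern in engagement optimisation).
    return post["likes"] + 3 * post["shares"] + 5 * post["comments"]

# Rank the feed purely by predicted engagement, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)

for post in feed:
    print(engagement_score(post), post["text"])
```

Because outrage and divisive content tend to attract the most shares and comments, a rule like this systematically pushes such posts to the top of the feed. That is the dynamic described in the reports below.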

An Amnesty report outlines how Meta contributed to serious human rights abuses against the Tigrayan community in northern Ethiopia. The report revealed that Meta ignored warnings from civil society and failed to conduct due diligence to prevent the spread of inflammatory content that incited violence, hate and discrimination.

Case Study: Meta’s role in ethnic cleansing of the Rohingya

In September 2022, Amnesty published a major investigation into how Meta amplified hatred and discrimination against the Rohingya in Myanmar, and launched an international campaign for Meta to provide reparations to the Rohingya community. Amnesty’s report, The Social Atrocity, is a first-of-its-kind, in-depth human rights analysis establishing Meta’s contribution to the serious atrocities perpetrated against the Rohingya in 2017.

It revealed how Facebook’s algorithms were designed to maximise “user engagement” at all costs, resulting in anti-Rohingya hate and misinformation being disproportionately distributed and amplified across the Facebook platform – thereby helping to create the enabling environment for the horrific mass violence which ensued. The story of Rohingya refugee Maung Sawyeddollah personifies the effects of Meta’s contribution to the proliferation of hate and discrimination.

We also monitor the ways in which governments are weaponizing Big Tech platforms like Facebook to censor and silence human rights defenders, as we showed in our 2020 report on Vietnam.  

Case Study: TikTok and Children’s mental health

Amnesty International has documented the systemic harms of the surveillance-based business model of TikTok, highlighting the platform’s abuse of the right to privacy, its addictive design and the role it plays in worsening children and young people’s mental health issues.

Our two reports – Driven into the Darkness: How TikTok Encourages Self-harm and Suicidal Ideation and I Feel Exposed: Caught in TikTok’s Surveillance Web – highlight the abuses experienced by children and young people using TikTok, and the ways in which these abuses are caused by TikTok’s recommender system and its underlying business model.

Graphic of a mobile phone with a blue figure on the screen, sitting with their head bowed. The phone is melting into a purple TikTok wormhole.

Surveillance and Privacy 

Across the world, we are also seeing the misuse of advanced technologies to spy on, harass and intimidate human rights defenders (HRDs), including youth activists, and dampen social movements.

Beyond examining problematic technologies themselves, we must challenge the social, economic, and political conditions that allow or encourage the misuse of technology by powerful actors, and their impact on specific populations such as women; racial, ethnic and religious minorities; children; LGBTQIA+ people; and those affected by poverty and inequality.

Technology must facilitate access to information, not censor it, as witnessed in the United States, where tech companies have been accused of facilitating online censorship of reproductive rights content.

Access to critical information on reproductive health and rights, including medical abortion, is everyone’s right.

A protester at a demonstration challenging U.S. security agencies’ reliance on surveillance.

Types of Surveillance  

  • Facial recognition is a way of identifying or confirming an individual’s identity from their facial features, via image or video.
  • Biometric surveillance is the use of human characteristics to identify individuals. Common forms of biometrics are fingerprint scanners and face identification.
  • AI-powered surveillance is the use of artificial intelligence and surveillance technology to monitor, track, analyse and identify people, places and objects.

AI-powered surveillance

AI-powered surveillance can be deployed by governments and corporate entities for various purposes, such as monitoring public spaces, some of which carry serious implications for human rights.

There is therefore an urgent need for detailed recommendations and guidance on the use of such technologies. Our investigations also provide detailed evidence supporting the need to ban technologies whose use is inconsistent with human rights.

We will continue to track, investigate, and advocate for stronger protections in the design and use of new and frontier technologies including artificial intelligence/machine learning (ML), biometric systems including facial recognition technology and emerging “smart cities” or digital urban infrastructures.

Over 1 million Amnesty supporters from around the world wrote letters in solidarity with Edward Snowden, a whistleblower and former intelligence officer who was pivotal in uncovering the shocking extent of global mass surveillance.

Case Study: Ban the Scan

In January 2021, Amnesty Tech launched the Ban the Scan campaign which seeks to ban facial recognition technologies (FRT) for identification purposes.

As part of the campaign, we published case studies on the impact of FRT use in New York City and how it disproportionately affects people of colour and threatens the right to peaceful protest. We also launched the Decode Surveillance NYC project, which brought together thousands of digital volunteers from all over the world to map CCTV and other public cameras across New York City that can be used with facial recognition software.

The Ban the Scan campaign subsequently expanded to include police abuse of FRT in Hyderabad, India and in East Jerusalem and Hebron in the Occupied Palestinian Territories with the publication of Amnesty’s Automated Apartheid report.

Video highlighting the use of facial recognition technology and racist policing

Targeted surveillance of activists 

Law enforcement, military and intelligence agencies are the top users and clients of spyware and surveillance companies.

They may seek specific information, such as someone’s whereabouts, a journalist’s sources, details of protests being organised, information someone holds on corruption, or evidence of criminal behaviour.

Governments argue that they use these surveillance tools to target “criminals and terrorists”, but the reality is that human rights activists, journalists and government critics are most commonly targeted by spyware such as Predator. Unlawful use of spyware violates many human rights, like the right to privacy, and the rights to freedom of expression, opinion, assembly, and association.

But spyware harms different people in different ways. Data can be weaponised and lead to more abuse, online and offline – especially for people who already face discrimination based on their identity. This abuse can take many forms, like blackmail, doxxing, cyber-stalking, harassment, and intimidation.

Spyware is a type of malicious software. It interferes with a device’s normal operation to collect information without alerting the user and sends it to another, unauthorised entity.

Highly invasive spyware allows unlimited access to a device by default. It leaves little to no trace, so for the user it’s almost impossible to know what data was taken.

Case Study: NSO Pegasus

Our investigation shows how NSO Group’s Pegasus spyware has been used to target activists, journalists and political leaders around the world.

It shows how the spyware can be surreptitiously installed on victims’ phones, allowing an attacker complete access to the device’s messages, emails, media, microphone, camera, calls and contacts.

The investigation established that Pegasus was used by Saudi operatives – although Saudi Arabia denies it – to target family members of the slain Saudi journalist Jamal Khashoggi, among the many victims of Pegasus, both before and after his murder in Istanbul on 2 October 2018.

Amnesty International members and supporters demonstrate against the presence of the NSO Group at the International Security Expo in London, 28 September 2021.

State automation and Artificial Intelligence

Today Artificial Intelligence (AI) is defining a new era of technology in the same way that the internet defined the previous one. AI is being incorporated into nearly every aspect of our daily lives, shaping and mediating everything from manufacturing and financial services to healthcare and the economy.

How are states using automation and algorithmic systems?

Governments are increasingly using algorithms in decision-making processes, for example in health care delivery, social welfare, policing, and migration.

This poses a variety of risks and potentially harmful impacts on human rights, including the right to equality and non-discrimination, as well as a range of other economic and social rights such as the rights to education and health. Transparency and accountability measures are required to safeguard human rights and mitigate the potential harms this technology might cause.

What is algorithmic bias?

Algorithmic bias occurs when algorithms make decisions that systematically disadvantage certain groups of people.

For example, an algorithm may predict the likelihood of a crime being committed by a particular person or in a particular place. Those people and places are then put under heightened police scrutiny, which can result in discrimination against the targeted groups.
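The feedback loop behind this kind of bias is easier to see in a deliberately simplified sketch. The hypothetical Python below (invented numbers, not any real policing system) shows how a model trained on recorded incidents, where the records themselves reflect where police already patrol, keeps flagging a historically over-policed district as higher risk even though the underlying offence rates are identical.

```python
# Toy predictive-policing feedback loop. Purely illustrative:
# the districts, patrol counts, and rates below are invented.

# Historic over-policing: district_a starts with more patrols,
# even though true offence rates are identical in both districts.
patrols = {"district_a": 12, "district_b": 8}
recorded = {"district_a": 0, "district_b": 0}

for year in range(3):
    # Each patrol records roughly the same number of incidents,
    # so recorded crime mirrors patrol allocation, not actual crime.
    for district in recorded:
        recorded[district] += patrols[district] * 5

    # The "model" estimates risk from recorded incidents alone...
    total = sum(recorded.values())
    risk = {d: n / total for d, n in recorded.items()}

    # ...and a fixed budget of 20 patrols is allocated by that risk,
    # sending police back to the already over-policed district.
    patrols = {d: round(20 * r) for d, r in risk.items()}

    print(year, risk)
```

Run it and district_a is flagged as 50% riskier every single year: the historic disparity in patrols is laundered through seemingly “objective” data and locked in, which is how algorithmic bias entrenches discriminatory policing.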

Case Study: Poverty and Discrimination in Serbia’s Welfare State

Trapped by Automation: Poverty and Discrimination in Serbia’s Welfare State documents how poverty-stricken and marginalised communities in Serbia are being pushed further into poverty by an automated welfare delivery system, funded by the World Bank, that strips them of social assistance.

Roma communities and persons with disabilities are the most impacted.

An illustration on a dark blue background of a holographic, neon-green hand pointing at a person.

Emerging technologies will continue to transform our societies.

Governments are increasingly adopting Artificial Intelligence tools in public-sector decision-making. This will affect people’s lives and rights.

We at Amnesty are focused on identifying the potential human rights implications of emerging technologies like Generative AI, and then engaging with both governments and companies to ensure that these technologies are developed and deployed in a rights-respecting manner.

New technologies should be developed and deployed with robust accountability guardrails and policy frameworks to protect human rights everywhere in the world. Technology must serve humanity’s best interests and not just the interests of the few rich and powerful owners of companies developing these technologies.

Governments have the responsibility to ensure businesses act responsibly when investing in technology.

What is Amnesty doing to hold governments and technology companies accountable?

We believe it’s time technology puts people and human rights first.  

To do this we investigate. We campaign. We lobby for changes in policy. We fight for justice. We hold the powerful to account. We’re setting the agenda for the future of human rights and technology to:  

  • Rewire the internet to put people and human rights first, ensuring Big Tech is effectively regulated to protect human rights, and companies are more transparent and more accountable.  
  • Resist the use of technology to silence activists, preventing unlawful surveillance, censorship and internet shutdowns.  
  • Rewrite the code to protect human rights, ensuring discrimination and bias are no longer programmed into algorithms that make life-changing decisions about us. 

Our theory of change is that we can limit both government and corporate abuse of technology by first leveraging our unique action-based research and communications capacity to identify and highlight new ways in which technology threatens our rights, and then harnessing the Amnesty movement to push corporations, governments in the Global Majority World and Global North, as well as key multilateral institutions, to enact responsible, effective regulation to curb these abuses.   

Tell TikTok to stop targeted advertising aimed at younger users

Make TikTok safer

Join our campaign to help #FixTikTok and call on TikTok to make its platform safer for children and young people.