India/Global: New technologies in automated social protection systems can threaten human rights  

Governments must ensure automated social protection systems are fit for purpose and do not prevent people eligible for welfare from receiving it, Amnesty International said today as it published a technical explainer on the underlying technology behind Samagra Vedika, an algorithmic system that has been used in India’s Telangana state since 2016. 

The technical explainer sheds light on Samagra Vedika’s human rights risks and its use of a technical process called “entity resolution”, in which machine-learning algorithms are used to merge databases, with the goal of assessing the eligibility of welfare applicants and detecting fraudulent or duplicate beneficiaries in social protection programmes.

Publication of the technical explainer follows media reports blaming Samagra Vedika for allegedly excluding thousands of people from accessing social protection measures, including those related to food security, income, and housing. A 2024 investigation published by Al Jazeera exposed how errors in the system, which consolidates individuals’ data from several government databases, led to thousands of families being denied vital benefits, raising serious human rights concerns around their right to social security.

“Automated decision-making systems such as Samagra Vedika are opaque, and they flatten people’s lives by reducing them to numbers using artificial intelligence (AI) and algorithms. In a regulatory vacuum and with no transparency, investigating the human rights impacts of such systems is extremely challenging,” said David Nolan, Senior Investigative Researcher at Amnesty Tech.  

Samagra Vedika uses a technical process called “entity resolution” to merge government databases and assess welfare applicants. Illustration by Luisa Balaban

The use of entity resolution represents a new class of welfare technology, in which complex processes, often incorporating AI and machine learning, systematically compare pairs of records on individuals across large datasets to determine whether they refer to the same person, and then assess an applicant’s eligibility against various criteria and detect fraudulent or duplicate beneficiaries.
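
To make the pairwise comparison at the heart of entity resolution concrete, the following is a minimal, illustrative sketch in Python. It is not the Samagra Vedika or Posidex implementation, which is not publicly available; the field names, weights, and matching threshold are hypothetical assumptions, and the simple string-similarity function stands in for whatever matching models a production system might use.

```python
# Illustrative sketch only: a minimal pairwise entity-resolution step.
# This is NOT the Samagra Vedika or Posidex implementation; the field names,
# weights, and the 0.85 threshold below are hypothetical assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 string similarity score (a simple stand-in for fuzzy matching)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Compare two records field by field and combine the scores with fixed weights."""
    weights = {"name": 0.5, "address": 0.3, "birth_year": 0.2}  # assumed weights
    score = weights["name"] * similarity(rec_a["name"], rec_b["name"])
    score += weights["address"] * similarity(rec_a["address"], rec_b["address"])
    score += weights["birth_year"] * (1.0 if rec_a["birth_year"] == rec_b["birth_year"] else 0.0)
    return score

def resolve(db_a: list[dict], db_b: list[dict], threshold: float = 0.85) -> list[tuple[dict, dict, float]]:
    """Compare every pair of records across two databases; pairs scoring above
    the threshold are treated as referring to the same person."""
    matches = []
    for rec_a in db_a:
        for rec_b in db_b:
            score = match_score(rec_a, rec_b)
            if score >= threshold:
                matches.append((rec_a, rec_b, score))
    return matches

# Toy example: a welfare roll matched against a second government register.
welfare = [{"name": "A. Kumar", "address": "12 Main Rd", "birth_year": 1980}]
register = [{"name": "A Kumar", "address": "12 Main Road", "birth_year": 1980}]
print(resolve(welfare, register))
```

Even in this toy version, the consequences of the threshold are visible: a pair of records scoring just below it is treated as two different people, while one just above it is merged into a single identity. In a welfare context, an error in either direction can translate into a person being wrongly flagged as a duplicate or denied benefits.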

Amnesty International dedicated a year to designing – and attempting to carry out – an audit of the Samagra Vedika system. Despite these efforts, the audit remains incomplete due to challenges in accessing the underlying system and a blanket lack of transparency from the developers and deployers of this system. Nevertheless, through embarking on this process, Amnesty International uncovered key methodological learnings and insights into the nascent field of algorithmic investigations. By sharing these, Amnesty International aims to enhance the collective capacity of civil society, NGOs, and journalists to conduct future research in this field. 

“Governments must realize that there are real lives at stake here,” David Nolan said. 

“The external procurement of these systems by governments from private companies increases the barrier for civil society and journalists to investigate the technical makeup of digitalized social protection. As a result, private and public actors who are responsible for the design and implementation of these automated tools escape accountability, whereas people impacted by these systems get stuck in a bureaucratic maze, with little to no access to remedy.” 

The case of Samagra Vedika in Telangana is emblematic of governments increasingly relying on AI and automated decision-making systems (ADMs) to administer social protection programmes. This trend often leads to unjust outcomes for already marginalized groups, such as exclusion from social security benefits, without adequate transparency, accountability, or access to remedy.

It is imperative that all states conduct thorough and robust human rights impact assessments before introducing technology into social protection systems, and that these assessments continue throughout the lifecycle of the system, from design to deployment, accompanied by effective mitigation measures as part of a human rights due diligence procedure.

Engagement with affected communities is essential, and any changes to vital support systems must be communicated clearly and accessibly. Ultimately, if a system is found to pose significant risks to human rights which cannot be sufficiently mitigated, it should not be deployed. 

Background

The technical explainer follows and independently builds on an investigation published by Al Jazeera in 2024, in collaboration with the Pulitzer Center’s Artificial Intelligence (AI) Accountability Network. This investigation revealed a pattern of flawed implementation in the Samagra Vedika system, resulting in the arbitrary denial of welfare access to thousands.

Amnesty International wrote to Posidex Technologies Private Limited – the private company that provides the entity resolution software upon which the Samagra Vedika system relies – in advance of the publication of this technical explainer. Amnesty International had not received a response at the time of publication.

In 2023, Amnesty International’s research, Trapped by Automation: Poverty and Discrimination in Serbia’s Welfare State, documented how many people, particularly Roma and people with disabilities, were unable to pay bills or put food on the table, and struggled to make ends meet after being removed from social assistance support following the introduction of the Social Card registry.

In 2021, Amnesty International documented how an algorithmic system used by the Dutch tax authorities had racially profiled recipients of childcare benefits. The tool was supposed to ascertain whether benefit claims were genuine or fraudulent, but the system wrongly penalized thousands of parents from low-income and immigrant backgrounds, plunging them into exorbitant debt and poverty.