Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups – report 

The Danish welfare authority, Udbetaling Danmark (UDK), risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of artificial intelligence (AI) tools to flag individuals for social benefits fraud investigations, Amnesty International said today in a new report. 

The report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, details how the sweeping use of fraud detection algorithms, paired with mass surveillance practices, has led people to unwillingly, or even unknowingly, forfeit their right to privacy, and has created an atmosphere of fear. 

“This mass surveillance has created a social benefits system that risks targeting, rather than supporting the very people it was meant to protect,” said Hellen Mukiri-Smith, Amnesty International’s Researcher on Artificial Intelligence and Human Rights. 

“The way the Danish automated welfare system operates is eroding individual privacy and undermining human dignity. By deploying fraud control algorithms and traditional surveillance methods to identify social benefits fraud, the authorities are enabling and expanding digitized mass surveillance.” 

UDK has mandated a company, Arbejdsmarkedets Tillægspension (ATP), to administer social benefits and carry out fraud control efforts. In turn, ATP has partnered with private multinational corporations, including NNIT, to develop fraud control algorithms tailored to ATP’s specifications. 

‘Sitting at the end of a gun’

UDK and ATP use a system of up to 60 algorithmic models purportedly designed to detect social benefits fraud and flag individuals for further investigations by Danish authorities. During its research, Amnesty International obtained partial access to four of these algorithms. 

To power these fraud-detection algorithms, Danish authorities have enacted laws that enable extensive collection and merging of personal data from public databases of millions of Danish residents. 

The data includes information on residency status and movements, citizenship, place of birth, and family relationships — sensitive data points that can also serve as proxies for a person’s race, ethnicity, or sexual orientation. 

“This expansive surveillance machine is used to document and build a panoramic view of a person’s life that is often disconnected from reality. It tracks and monitors where a social benefit claimant lives, works, their travel history, health records, and even their ties to foreign countries,” said Hellen Mukiri-Smith. 

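Neither UDK nor ATP has published the code behind these systems. Purely to illustrate the mechanism described above, the following minimal sketch shows how joining separate public databases on a personal identifier produces a single panoramic profile of a claimant; every registry, field name, and record in it is invented.

```python
# Illustrative sketch only, NOT UDK/ATP code: all registries, field names,
# and records below are invented. It shows the mechanism the report
# describes, in which data from separate public databases is merged on a
# personal identifier into one panoramic profile of a claimant.

residency_registry = {
    "id-001": {"address": "Example Street 1", "moves_last_5_years": 3},
}
civil_registry = {
    "id-001": {"citizenship": "DK", "place_of_birth": "outside EEA"},
}
family_registry = {
    "id-001": {"household_size": 6, "registered_relations_abroad": 2},
}

def build_profile(person_id: str) -> dict:
    """Merge records about one person from every available registry."""
    profile: dict = {"person_id": person_id}
    for registry in (residency_registry, civil_registry, family_registry):
        profile.update(registry.get(person_id, {}))
    return profile

print(build_profile("id-001"))
```
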
UDK argues that the vast collection and merging of personal data to detect social benefits fraud is ‘legally grounded’. However, Amnesty International’s findings show that the enormous amount of data collected and processed is neither necessary nor proportionate. 

Individuals interviewed by Amnesty International described the harmful psychological toll of being subjected to surveillance by fraud investigators and case workers. The Chairperson of the Social and Labor Market Policy Committee at Dansk Handicap Forbund, Gitte Nielsen, said that people with disabilities who are constantly interrogated by case workers often feel depressed and report that the constant scrutiny is “eating” away at them. 

Describing the terror of being investigated for benefits fraud, an interviewee told Amnesty International: “[It is like] sitting at the end of the gun. We are always afraid. [It is as] if the gun is [always] pointing at us.” 

Unfair, discriminatory algorithms

The discrimination perpetuated by UDK and ATP’s algorithms is happening in an environment of pre-existing inequalities — laws, rules, institutions, norms, and values — within Danish society. These discriminatory structures are embedded in the design of ATP’s algorithmic models, facilitating the categorization of individuals and communities based on difference or “othering”. 

To identify social benefits fraud in pension and childcare schemes, the Danish authorities deploy the Really Single algorithm to predict a person’s family or relationship status. One of the parameters employed by the Really Single fraud control algorithm is the presence of “unusual” or “atypical” living patterns or family arrangements. Yet there is no clarity on what constitutes such an arrangement, leaving the door open for arbitrary decision-making. 

“People in non-traditional living arrangements — such as those with disabilities who are married but who live apart due to their disabilities; older people in relationships who live apart; or those living in a multi-generational household, a common arrangement in migrant communities — are all at risk of being targeted by the Really Single algorithm for further investigation into social benefits fraud,” said Hellen Mukiri-Smith. 

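Amnesty International obtained only partial access to these models, and the report does not define what the system counts as “atypical”. The hypothetical rule below, in which every field name and threshold is invented, illustrates how such an undefined parameter can hard-code a narrow norm of family life and flag exactly the arrangements listed above.

```python
# Hypothetical sketch, NOT the actual Really Single model: every field name
# and threshold here is invented. It shows how an undefined "atypical
# living pattern" parameter can encode one norm of family life.

def atypical_living_pattern(claimant: dict) -> bool:
    """Flag claimants whose households deviate from an assumed 'norm'."""
    married_but_apart = (
        claimant["married"] and not claimant["shares_address_with_spouse"]
    )
    large_household = claimant["household_size"] > 4  # invented threshold
    return married_but_apart or large_household

# A married couple living apart because of disability and a
# multi-generational household are both flagged the same way:
print(atypical_living_pattern(
    {"married": True, "shares_address_with_spouse": False, "household_size": 1}
))  # True
print(atypical_living_pattern(
    {"married": False, "shares_address_with_spouse": False, "household_size": 7}
))  # True
```
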
UDK and ATP also use inputs related to “foreign affiliation” in their algorithmic models. The Model Abroad algorithm identifies groups of beneficiaries deemed to have “medium and high-strength ties” to non-EEA countries and prioritizes these groups for further fraud investigations. The research finds that this approach discriminates against people based on factors such as national origin and migration status. 

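The report describes Model Abroad’s output, groups ranked by the strength of their ties to non-EEA countries, but not its internals. The sketch below, with an invented scoring function and an abbreviated country list, illustrates the structural point: any model that scores “foreign affiliation” ranks people by national origin and migration history.

```python
# Hypothetical illustration, NOT the actual Model Abroad code: the scoring
# weights and fields are invented. Structurally, scoring ties to non-EEA
# countries means ranking claimants by national origin and migration history.

EEA_COUNTRIES = {"DK", "SE", "DE", "NO"}  # abbreviated stand-in for the full list

def foreign_ties_score(claimant: dict) -> int:
    """Higher score = stronger assumed ties to non-EEA countries."""
    score = 0
    if claimant["country_of_birth"] not in EEA_COUNTRIES:
        score += 2  # invented weight
    score += claimant["years_lived_outside_eea"]  # invented weight
    return score

def prioritize_for_investigation(claimants: list[dict]) -> list[dict]:
    """Claimants with 'medium and high-strength ties' come first."""
    return sorted(claimants, key=foreign_ties_score, reverse=True)

claimants = [
    {"name": "A", "country_of_birth": "DK", "years_lived_outside_eea": 0},
    {"name": "B", "country_of_birth": "TR", "years_lived_outside_eea": 4},
]
print([c["name"] for c in prioritize_for_investigation(claimants)])  # ['B', 'A']
```
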
In its response to Amnesty International, UDK stated that the use of “citizenship” as a parameter in its algorithms does not constitute processing of sensitive personal information. Amnesty International disagrees with UDK and ATP: citizenship as a parameter can reveal a person’s race, ethnicity, or migration status, and its use directly discriminates on the basis of national origin. 

A Social Scoring System?

UDK and ATP provided Amnesty International with redacted documentation on the design of certain algorithmic systems, and consistently rejected Amnesty International’s requests for a collaborative audit, refusing to provide full access to the code and data used in their fraud detection algorithms. 

“The information that Amnesty International has collected and analysed suggests that the system used by the UDK and ATP functions as a social scoring system under the new EU Artificial Intelligence law (AI Act) — and should therefore be banned,” said Hellen Mukiri-Smith. 

UDK has rejected Amnesty International’s assessment that its fraud detection system is likely to fall under the social scoring ban of the AI Act, without sufficiently explaining its reasoning. 

Amnesty International also urges the European Commission to clarify, in its AI Act guidance, which AI practices count as social scoring, addressing concerns raised by civil society. 

The organization also calls on the Danish authorities to halt the use of UDK’s fraud detection system until it can be shown that the system does not amount to social scoring. 

“The Danish authorities must urgently implement a clear and legally binding ban on the use of data related to ‘foreign affiliation’ or proxy data in risk-scoring for fraud control purposes. They must also ensure robust transparency and adequate oversight in the development and deployment of fraud control algorithms,” said Hellen Mukiri-Smith. 

Under various international human rights treaties, European Union laws, and national law, Denmark has a legal obligation to safeguard human rights like privacy, data protection, freedom of expression, equality, and non-discrimination.  

“These rights aren’t just the driving force of a just society; they also enable fair access to social security, healthcare, and more,” said Hellen Mukiri-Smith. 

Background

Ahead of the report’s publication, Amnesty International reached out to NNIT, but the company did not provide further information about its contractual arrangements with UDK and ATP, citing confidentiality obligations. NNIT also did not disclose information about any human rights due diligence it conducted before entering its agreement with UDK and ATP. 

The research on Denmark also builds on previous Amnesty International research on public sector automation and digitalization in the Netherlands, India and Serbia, and the resulting human rights risks and impacts of algorithmic decision-making in these jurisdictions.