Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal

Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands has been at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits, aiming to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors the tax authorities used to assess the likelihood of inaccuracy or fraud in submitted applications. This report illustrates how the use of applicants' nationality resulted in discrimination and racial profiling.