‘The Great Hack’: Cambridge Analytica is just the tip of the iceberg

It was the scandal that finally exposed the dark side of the big data economy underpinning the internet. The inside story of how one company, Cambridge Analytica, misused intimate personal Facebook data to micro-target and manipulate swing voters in the US election is compellingly told in "The Great Hack", a new documentary out today.

But as the former CEO of the now-defunct Cambridge Analytica tells the film-makers, this is "not just about one company". The film goes further, opening our eyes to the way our lives are constantly monitored – and controlled – through digital technology. And it goes to the heart of how the entire business model of some Big Tech companies may deeply threaten our human rights.

In the online and digital world, everything you do leaves a trace of "data exhaust" – a record of everything from what time you put petrol in your car to which websites you visited. When combined, even seemingly innocuous data points can reveal a LOT about a person.
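To make that concrete, here is a toy sketch of how separate, harmless-looking data streams combine into sensitive inferences. Every name, record and inference rule below is invented for illustration; real data brokers work at vastly greater scale.

```python
# Toy illustration: joining innocuous "data exhaust" streams into a profile.
# All data, names and inference rules are hypothetical.

fuel_stops = {"alice": ["Mon 08:05", "Thu 08:10"]}           # petrol-station timestamps
site_visits = {"alice": ["gluten-free-recipes.example",      # browsing history
                         "marathon-training.example",
                         "pharmacy.example/insulin-pumps"]}

def infer_profile(user):
    """Combine separate data streams into one profile of the person."""
    profile = {}
    stops = fuel_stops.get(user, [])
    if any(t.startswith(("Mon", "Tue", "Wed", "Thu", "Fri")) for t in stops):
        profile["likely_commuter"] = True                    # routine reveals a daily schedule
    visits = site_visits.get(user, [])
    if any("insulin" in v for v in visits):
        profile["possible_health_condition"] = "diabetes"    # a health inference from one URL
    if any("marathon" in v for v in visits):
        profile["interest"] = "running"
    return profile

print(infer_profile("alice"))
# {'likely_commuter': True, 'possible_health_condition': 'diabetes', 'interest': 'running'}
```

Neither data point is alarming on its own; joined together, they sketch a person's health, habits and movements.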

Cambridge Analytica bragged that it had up to 5,000 data points on every US voter. By applying "psychographic" analytics to its dataset, it claimed to be able to determine people's personality types and then individually micro-target messages to influence their behaviour. The most important source of the data was Facebook: via a third-party app, Cambridge Analytica improperly obtained data from up to 87 million Facebook profiles – including status updates, likes and even private messages.
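Cambridge Analytica's actual models were never made public, but the claimed approach can be sketched in a few lines: score a personality trait from observed likes, then serve the ad variant aimed at that trait. The weights, pages and messages below are all invented.

```python
# Minimal sketch of the *claimed* psychographic approach: estimate a trait
# from page likes, then pick the message variant aimed at that trait.
# Cambridge Analytica's real models are not public; all values are invented.

TRAIT_WEIGHTS = {             # hypothetical page-like -> neuroticism weights
    "true-crime-page": 0.4,
    "meditation-page": -0.3,
    "news-alerts-page": 0.2,
}

MESSAGES = {                  # one message per inferred personality bucket
    "high_neuroticism": "Protect your family before it's too late.",
    "low_neuroticism": "Here's a plan that works for people like you.",
}

def neuroticism_score(likes):
    return sum(TRAIT_WEIGHTS.get(like, 0.0) for like in likes)

def pick_message(likes, threshold=0.3):
    bucket = "high_neuroticism" if neuroticism_score(likes) >= threshold else "low_neuroticism"
    return MESSAGES[bucket]

print(pick_message(["true-crime-page", "news-alerts-page"]))   # fear-framed variant
print(pick_message(["meditation-page"]))                       # reassuring variant
```

The unsettling part is not the arithmetic, which is trivial, but that two voters with identical politics can be shown entirely different realities.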

But the incident was not an aberration: it was an inevitable consequence of a system founded on harvesting and monetising our information – the business model that academic Shoshana Zuboff dubs "surveillance capitalism". The model's fundamental characteristics are: aggregating vast amounts of data on people, using that data to infer incredibly detailed profiles of their lives and behaviour, and monetising those profiles by selling behavioural predictions to others, such as advertisers. Cambridge Analytica simply deployed the same basic model to target voters rather than consumers.
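The three stages named above – aggregate, profile, monetise – fit together as one pipeline. A minimal, entirely hypothetical sketch:

```python
# The surveillance-capitalism pipeline reduced to three toy stages.
# Everything here is simplified and invented for illustration.

def aggregate(streams):
    """Stage 1: pool data points about one person from many sources."""
    merged = {}
    for stream in streams:
        merged.update(stream)
    return merged

def profile(data):
    """Stage 2: turn the raw data into a behavioural prediction."""
    return {"likely_to_buy_running_shoes": any("marathon" in v for v in data.values())}

def monetise(prediction, bids):
    """Stage 3: sell access to the prediction – e.g. via an ad auction."""
    if prediction["likely_to_buy_running_shoes"]:
        return max(bids, key=bids.get)        # highest bidder wins the ad slot
    return None

person = aggregate([{"search": "marathon training plan"}, {"location": "park, 6am"}])
winner = monetise(profile(person), {"ShoeCo": 1.20, "SodaCo": 0.40})
print(winner)   # ShoeCo
```

Note that the person is never a party to the final transaction: the prediction about them is the product, sold to whoever bids most.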

This model has become core to the data economy, and it underpins a complex ecosystem of tech companies, data brokers, advertisers and beyond. But it is the model's pioneers, Google and Facebook, that have an unparalleled ability to track and monetise our lives, because they control the primary gateways – outside China – to the online world (between them they own Google Search, Chrome, Android, YouTube, Instagram and WhatsApp).

Facebook and Google have, of course, long affirmed their commitment to respecting human rights. But increasingly we are being forced to ask whether the internet's surveillance model itself inherently conflicts with our human rights.

Facebook and Google have amassed data vaults with an unprecedented volume of information on human beings. This goes far beyond the data that you choose to share on their platforms, to include the vast amounts of data tracked as you engage with the digital world. Mass corporate surveillance on such a scale threatens the very essence of the right to privacy. Indeed, as early as 2010, Facebook CEO Mark Zuckerberg famously claimed that privacy was no longer a "social norm".

But harvesting the data is only the first part of the story. The next step is using sophisticated analytics powered by machine learning to profile people – and thereby influence their behaviour. In the furore over Cambridge Analytica, Facebook's own profiling practices largely escaped scrutiny. The company has explored personality profiling, ways to manipulate emotions, and how to target people based on psychological vulnerabilities – such as moments when they felt "worthless" or "insecure". Google developed a tool to target ads so precisely that it can sway people's beliefs and change their behaviour through "social engineering" – while initially developed to counter Islamic extremism, the tool is publicly available for anyone to (mis)use.
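None of these internal systems is public, so the following is only a generic sketch of the standard machine-learning pattern they rest on: train a classifier on behavioural signals, then predict a sensitive label for every new post. The training data and labels are invented.

```python
# Generic sketch of ML-based profiling -- not any company's actual system.
# Pattern: fit a classifier on labelled behavioural data, then score new data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: status updates labelled with an emotional state.
posts = ["nothing ever goes right for me", "great run this morning",
         "i feel so useless lately", "excited for the weekend"]
labels = ["insecure", "confident", "insecure", "confident"]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(posts)            # bag-of-words features
model = LogisticRegression().fit(X, labels)

# A platform could score every new post this way -- and an advertiser could
# then target users at their predicted low moments.
new_post = ["i feel like such a failure today"]
print(model.predict(vectoriser.transform(new_post)))   # e.g. ['insecure']
```

A real system would use billions of examples and far richer signals, but the ethical problem is visible even in the toy: the label being predicted – someone's emotional vulnerability – becomes a targeting parameter.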

One of the most urgent and uncomfortable questions raised in The Great Hack is: to what extent are we susceptible to such behavioural manipulation? Ultimately, if these capabilities are as powerful as the companies and their customers claim, they pose a real threat to our ability to make our own autonomous decisions, and even to our right to freedom of opinion, undermining the fundamental value of dignity that underpins all human rights. Advertising and propaganda aren't new, but there is no precedent for targeting individuals in such intimate depth, and at the scale of whole populations.

The model may also be helping to fuel discrimination. Companies – and governments – could easily abuse data analytics to target people based on their race, ethnicity, religion, gender or other protected characteristics. The push to grab users' attention and keep them on platforms can also encourage the current toxic trend towards the politics of demonisation. People are more likely to click on sensationalist or incendiary material, meaning that engagement-driven platforms systematically privilege conspiracy theories, misogyny and racism.
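The mechanism is mundane. If a feed is ranked purely on predicted engagement – with no term for accuracy or harm – the incendiary item rises to the top automatically, as this toy ranking (with invented scores) shows:

```python
# Sketch of engagement-driven ranking: sort purely on predicted clicks and
# incendiary content floats up. All titles and scores are invented.

posts = [
    {"title": "City council publishes budget report", "predicted_ctr": 0.02},
    {"title": "You won't BELIEVE what this group is hiding", "predicted_ctr": 0.11},
    {"title": "Local school wins science prize", "predicted_ctr": 0.03},
]

# Optimising only for attention means sorting on click-through rate alone.
feed = sorted(posts, key=lambda p: p["predicted_ctr"], reverse=True)
for post in feed:
    print(post["title"])   # the conspiracy-style headline tops the feed
```

No one has to intend the outcome; the objective function produces it by itself.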

What is to be done? The data-driven business model is a systemic, structural problem that will not be easy to address and will require a mix of political and regulatory solutions. Stronger data protection is certainly part of the answer: properly enforcing Europe's General Data Protection Regulation, which has international reach, and using it as a model in other countries would limit the extent of data mining and profiling.

More radical calls to break up the Big Tech companies have become commonplace, and the industry is already being examined by competition authorities in various jurisdictions. A recent decision by Germany's Federal Cartel Office to limit data sharing and aggregation between Facebook and WhatsApp is one example of a targeted measure to counter the concentration of power in the hands of the biggest players.

Whatever regulatory tools are deployed, it is vital that they are grounded in an analysis of the human rights risks posed by the model. Human rights provide the only international, legally binding framework that can capture the multi-faceted ways in which this business model affects our lives and what it means to be human – and hold the companies to account.

What is clear is that current efforts are not tackling the root causes of the problem. Two weeks ago, US regulators approved a record $5bn settlement with Facebook over Cambridge Analytica. But after news of the fine broke, Facebook's share price went UP.

The lesson: the company and its investors would be happy for this to remain an isolated incident. It will pay some nominal fines – $5bn is a drop in the ocean for a company that makes $22bn in profit a year – and make a few improvements to its privacy protections, but then go back to business as usual.

We cannot let this happen. It is high time to confront the human rights impacts of “surveillance capitalism” itself.