Ethical AI principles won’t solve a human rights crisis

I recently returned from RightsCon, the huge annual global meeting on tech and human rights, with keynote speaker Zeynep Tufekci’s words ringing in my ears: “Digital rights, privacy rights, access rights, these are not optional rights — they are fundamental rights”; “We shouldn’t have to beg, plead and become technical wizards to exercise our fundamental rights”. Tufekci called on tech companies to make products that adhere to human rights by default, not as an afterthought. And she’s right.

While many of the tech multinationals were present at and participating in the conference, there is a depressing gulf between talking in that kind of safe space and real-world action — action that truly preserves human rights for everyone, with a real focus on the impact on the most disempowered, marginalized and vulnerable people in our societies.

Both governments and tech companies have huge roles to play in stopping technology from steamrolling human rights, yet neither is taking nearly enough action to seriously address the proliferation of risks arising from the continuing development and roll-out of new products. Widespread adoption of machine learning and automated decision-making systems without adequate consideration of their impact is a critical threat to hard-won human rights which, if lost, will be incredibly difficult to reclaim.

The Toronto Declaration, one year on

At last year’s RightsCon, Amnesty and Access Now launched the Toronto Declaration, a landmark statement on the impact of machine learning systems on the rights to equality and non-discrimination.

Created by over 30 experts from human rights and civil liberties groups, academics, lawyers and tech policy-makers, the Declaration is a call to action for governments and companies to each meet their existing responsibilities to safeguard human rights, as per international human rights law and standards.

In the Toronto Declaration, we simply lay out those existing structures — states have legal obligations to meet and companies have a responsibility to work within the human rights due diligence framework — and translate them for the machine learning age.

The Toronto Declaration was widely endorsed by civil society, but the reception from tech companies has been decidedly cold.

Tech companies are happy to talk about human rights, less keen to take responsibility

Representatives from some tech multinationals joined us in the room for last year’s one-day drafting marathon (a niche endurance event), but even those who were happy to contribute to the drafting of the Toronto Declaration, and whose input informed its wording, have shied away from publicly endorsing it.

I’ve been told by tech company representatives that “it would be used against them to hold them accountable” (…to existing standards) and that “it reads like it was written by angry people”, to which I say: of course we are angry! We’re angry that, despite evidence that machine learning systems can undermine human rights, the development and sale of these products continue, with scant consequences for those responsible for building and deploying them without considering their human rights impact. We know, for example, that live facial recognition technology threatens privacy, non-discrimination, assembly and association rights (and more), yet it’s being used in all kinds of weird and not-so-wonderful scenarios.

Companies have a clear responsibility to make sure they neither cause nor contribute to human rights abuses, and that they are not directly linked to human rights abuses through their business relationships. That means going beyond lofty principles to concrete action.

Silicon Valley’s love affair with ethics

A wave of ethics-washing has swept Silicon Valley over the past year, as tech companies rush to understand the impact of their technology and, seemingly, to curb harmful uses. But the companies themselves are intent on writing the rules.

Ethical guidelines are a useful tool when they are used to raise the bar above and beyond existing rights protections. The problem right now is that ‘ethics’ is being used as a mechanism to lower the bar, allowing those responsible for potential misuse to evade accountability. Ben Wagner puts it well:

“Unable or unwilling to properly provide regulatory solutions, ethics is seen as the ‘easy’ or ‘soft’ option which can help structure and give meaning to existing self-regulatory initiatives. In this world, ‘ethics’ is the new ‘industry self-regulation.’”

A few weeks after Toronto, Google launched its AI Principles. They look great: pro human rights, anti AI for use in weapons. They coalesce around rights-based concepts of fairness, privacy, social good and — crucially — accountability.

“Be accountable to people: We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” (Google AI Principles)

Yet this principle appears to apply only to Google’s products, rather than to the company’s operations, and only on its own terms. Google, like any big multinational tech company, remains woefully unaccountable when it creates and sells products that threaten to undermine human rights: the Principles didn’t stop the creation of Project Dragonfly. As with Project Maven, it was employee mobilization and criticism from Amnesty and other human rights groups that seemed to deter Dragonfly’s development last year. And the brief lifespan of Google’s ethics oversight group shows that, without consultation, a clear remit and an accountability roadmap, even potentially useful governance mechanisms lack the credibility, substance and teeth needed to win buy-in, internally or externally.

Google is just one example. Microsoft has AI principles; Facebook is establishing an ethics in AI center at a German university; Baidu and Tencent are calling for ethical frameworks for AI. But to what extent are these ventures taken seriously by the companies that create them? With the notable exception of Microsoft’s now well-publicized decision to turn down facial recognition contracts with two potential clients over human rights concerns, companies are not publicly reporting how their ethics initiatives are stopping ‘unethical’ tech or informing development and contracting decisions.

Meanwhile, Microsoft and Google both cited ‘unethical tech’ as bad for business in their annual reports last year, and Facebook is reportedly struggling to recruit following its human rights and wider existential crisis. Ethics initiatives may soothe investors worried about the financial impact of bad publicity, but without enforcement mechanisms and robust oversight they amount to little more than a marketing exercise designed to convince users that their products and systems are trustworthy.

States are playing catch-up on tech governance

Governments are ultimately responsible for protecting our human rights, yet they are struggling to curb threats from new technologies, particularly on the machine learning front. Meanwhile, many states are themselves adopting and deploying these technologies in public sector service delivery, from criminal justice functions to welfare systems.

New multi-state initiatives from the OECD and the UN’s human rights body, OHCHR, are a welcome international arrival to the AI and human rights party. Here’s hoping they can coalesce states around some robust governance initiatives — the sooner, the better. Self-regulation clearly isn’t working (and nor should it).

Human rights safeguards should be a feature, not an afterthought

In the meantime, the power to curb, or even halt altogether, the development of tech products and systems that risk widening inequality, rolling back privacy and threatening our rights to freedom of association, expression and remedy, among others, realistically lies with the tech companies themselves. They are the ones leading the research and development in this field, and not only do they have a responsibility to consider the human rights impact at the start of a product’s life cycle, rather than as an afterthought, but they also have the power to prevent harm through considerate design, consultation and oversight of their products.

Companies ultimately have a responsibility to identify risks and to prevent their products or services from causing, contributing to or being linked to human rights abuses. Understanding the human rights risk is where Amnesty and other rights groups come in, along with, crucially, affected end users, who really ought to be consulted and listened to when systems and products are directly shaping their society and potentially limiting their human rights.

Enough with the ethics-washing. Powerful multinational companies must incorporate human rights due diligence from the start, as a default. Wouldn’t it be wonderful if human rights protections were seen as a feature that drives innovation, rather than as a ‘barrier’ to ‘progress’? To make products that are truly trustworthy, creators should strive to respect and preserve the human rights that we are all entitled to, by default.