By Anna Bacciarelli, Engagement Lead, Technology and Human Rights
Search online for AI on any given day and you’ll find a stack of media coverage of companies investing heavily in AI research and software, and of how AI systems will revolutionize the way we work and live.
AI is certainly having its moment in the limelight. Advances in computing power, unprecedented access to massive amounts of data, and heavy financial investment in research and development have driven phenomenal progress in the field over the past decade.
AI-powered systems have reached huge milestones quicker than many people who know about this stuff anticipated: one breakthrough being the defeat of the (human) champion of Go, an ancient Chinese game, by an AI system last year. AlphaGo, a programme from Google-owned AI company DeepMind, used powers of prediction and adaptation thought to be a decade away in computing terms to defeat the world’s top Go player, Lee Sedol. AlphaGo and DeepMind did it again last month; the application may be niche, but the victory wasn’t a one-off.
What does this have to do with human rights?
Well, if the fast-accelerating rate of development and use of AI continues as expected, the world we live in is going to look very different in the very near future. And it’s going to have a huge impact on our rights — in fact, in some cases it already is. We’re now at the stage where AI-enhanced software is making key decisions in our lives. It’s only going to play a bigger role in coming years.
Amnesty was at the AI for Good summit recently, the first global meeting of its kind to really look at the social impact of AI, how best to mitigate its risks — and ideally make AI that (you guessed it) works for our rights, and works for good. In practice that meant a bunch of technologists, AI researchers, tech companies, diplomats, entrepreneurs and rights groups coming together, realizing we’re all asking similar questions, and trying to figure out how to stop a worst-case scenario where our rights are steamrolled as technology advances.
“AI is built by humans and it will be shaped by human values. If we build AI systems that are a mirror to our current societies, they will be riddled with the historical biases and inequalities of our societies.” — Salil Shetty, Amnesty International’s Secretary General
Amnesty Secretary General Salil Shetty spoke on some of the AI-related issues we’re working on at Amnesty — investigating risks and benefits, interrogating where AI systems are already being used, and ultimately helping the people who create the systems to put human rights at the centre of their design.
The creep of automated decision-making
What do we mean when we talk about artificial intelligence?
Forget Terminator. That vision of artificial general intelligence — the singularity, when machines reach human-level intelligence — remains in the realm of science fiction.
What is advancing is narrow AI — systems designed to carry out a specific task or process that ‘learn by doing’ — whether that’s through supervised learning (a system trained on labelled examples, corrected each time it gets one wrong, until it learns the underlying patterns) or newer methods of deep learning (systems that learn in a more sophisticated way, using layered structures loosely modeled on processes in the human brain). Right now, ‘artificial intelligence is often just a fancy name for a computer programme’.
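To make ‘learning by doing’ a little more concrete, here is a minimal, purely illustrative Python sketch of supervised learning (a hypothetical toy example, not code from any real AI system): a simple model guesses a label, is corrected against the right answer, and adjusts its internal numbers until its guesses match the pattern in the training data.

```python
# A toy, hand-rolled example of supervised learning: a tiny 'perceptron' that is
# corrected every time it mislabels a training example, until it learns the pattern.
# Purely illustrative; real systems use vastly more data and far larger models.

# Hypothetical training data: each example is (inputs, correct label).
# The pattern to learn here is simply 'output 1 only when both inputs are 1'.
training_data = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]  # the numbers the system adjusts as it learns
bias = 0.0
step = 1.0            # how far to nudge the numbers after each mistake


def predict(inputs):
    """Guess a label: 1 if the weighted sum of the inputs crosses the threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0


# 'Learning by doing': guess, compare with the correct label, and adjust the
# weights whenever the guess is wrong. Repeat until the guesses are right.
for _ in range(25):
    for inputs, label in training_data:
        error = label - predict(inputs)
        if error != 0:
            weights = [w + step * error * x for w, x in zip(weights, inputs)]
            bias += step * error

print([predict(inputs) for inputs, _ in training_data])  # matches the labels: [0, 0, 0, 1]
```

Scale the same idea up to millions of examples and millions of adjustable numbers and you have the kind of system that decides what appears in a news feed or estimates a ‘risk score’.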
However, computer programmes that learn are powering more and more decisions in our lives — from the relatively benign, like deciding what you see on Facebook or Netflix, to those with bigger consequences — say, recommending parole sentences.
It’s important to remember that machines that learn are still built and guided by humans: a programmer sets the algorithm’s parameters and training data, and often hand-holds the system as it learns. But what isn’t fully under human control is the system’s output — which becomes problematic when systems are not transparent.
At present, all deep learning algorithms are opaque — even the developers who built a deep learning system cannot explain why or how it arrived at a decision. This situation is often referred to as the ‘black box phenomenon’ — the inputs and outputs are visible, but the steps a system takes to get from input to output are not.
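As a purely hypothetical illustration of that black box, the sketch below (using the scikit-learn library and made-up data, not any real decision system) trains a small neural network: the input and the output are easy to print, but the ‘reasoning’ in between is nothing more than a long list of learned numbers.

```python
# A toy illustration of the 'black box' problem: invented data, not a real system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))                   # 200 invented cases, 5 features each
y = (X[:, 0] + X[:, 3] > 1).astype(int)    # a hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

new_case = rng.random((1, 5))
print("input:", new_case)                  # visible
print("output:", model.predict(new_case))  # visible
# The steps in between are just learned numeric weights, with no
# human-readable reason attached to the decision:
print("learned parameters:", sum(w.size for w in model.coefs_))
```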
More and more services are being automated with AI-enhanced technology designed to make decisions — marking an increase in the power such software holds in our society. Outsourcing tasks to machines is not new — we’ve been doing that for a long time, replacing or augmenting manual tasks with telephone switchboards, calculators, or computerized manufacturing, to name a few examples. The shift is in delegating decision-making power to software, whether it issues a decision or recommends one.
Even when a system is not fully automated, and merely recommends a decision to a human, a lot of trust is placed in that machine, despite it often being unclear how the recommendation was reached. In automated parole sentencing, for example, software analyzes data to recommend a prisoner’s re-offending ‘risk score’ to a judge, who then makes the decision.
“We’re delegating decisions to machines and that is one of the biggest ethical questions: is it possible to draw a line, and say ‘these types of decisions should not be delegated to machines’?” — Luka Omladič, COMEST, at AI for Good
It’s not always terrible — no one is employed to hand-pick what I see on Facebook or in Apple News in the morning. The content in those feeds is served up based on my past browsing behaviour, and it’s arguably the most efficient way to show me news I’m interested in — certainly more effective than employing someone to do it. (Though the filter bubbles of Facebook and the like arguably fuel polarised opinions and the spread of ‘fake news’.) But there is a huge risk when machines are delegated tasks of social importance, especially when those systems aren’t transparent.
The human cost of AI-driven decisions
State benefits or healthcare being cut off without explanation. Discrimination in the workplace or in the criminal justice system. These situations certainly aren’t science fiction — they are current examples from a big-data world where our increasing reliance on automated decision-making could drive an ever-widening gap between those who can access their rights and those who can’t. AI-powered tech has already been found, in several cases, to entrench and perpetuate discrimination and inequality.
Take criminal justice. A 2016 ProPublica investigation into automated risk scoring in the US state of Florida found that black defendants were routinely rated as more likely to re-offend than white defendants who had committed more serious crimes, because the algorithms drew on historically biased data. Judges consulting these automated re-offending risk scores in many cases handed down harsher punishments to those with higher scores — even though the recommendations were flawed.
Another example is in healthcare. The ACLU is currently bringing a case against the US state of Idaho on behalf of 4,000 people with developmental and intellectual disabilities whose Medicaid assistance was cut without explanation by software that, ACLU data scientists discovered, had been fed an unrepresentative batch of flawed data.
The decision-making role of software is raising lots of questions. Primarily: should we really be letting these kinds of systems make important decisions when we can’t explain how or why those decisions are reached, and when the systems are neither fair nor transparent? Who is responsible for the decisions this technology generates, and how can those affected challenge an unfair decision and get justice?
“We used to say that the internet would be a great development for free speech, but at the same time, the internet brought polarisation… We thought it was going to be a gender equalizer, but online the gender gap grew wider.” — Frank La Rue, UNESCO, at AI for Good
At present, AI-powered systems look like they could polarize access to services, and widen the gap between ‘haves’ and ‘have nots’ to an unforeseen degree.
Designing AI systems that favour rights
As AI academic Yoshua Bengio pointed out at AI for Good, developers and researchers focus on creating code; they don’t necessarily think about deployment.
It’s our job at Amnesty to think about the human rights use-case of software. Facial recognition software is fairly benign when helping us efficiently tag friends in Facebook photos, but becomes far more sinister when used to police protests or for government mass surveillance programmes, for example.
We need the people creating new technology to really consider the potential for misuse — AI for bad, if you like — and not develop or deploy systems that pose a serious threat to our rights. We’re calling for an all-out ban on the development of autonomous weapons systems — also known as killer robots — for example.
We want to work with technology experts, AI researchers and those using AI-powered software to create human rights principles for AI: design and policy recommendations that we hope will help those creating AI-enhanced technology to ensure it protects rights.
We’re keen to understand the risks to rights and to develop principles that are practical and relevant for those designing and deploying AI-enhanced systems.
First off, we need to address a few big challenges to make AI tech work for rights:
- Promote fairness
We need to address the huge problem of bias in historical data, and of software that amplifies existing bias.
- Be fully transparent
If deep learning systems can’t be interrogated, should they really be deployed to make critical decisions in the first place? There’s also the question of transparency for the people affected — should we be alerted when a decision is fully or partly automated?
- Support accountability
Who is responsible when a system does not function as it’s supposed to? (That presumes we can interrogate a system to expose its errors in the first place!) If AI systems are going to be key decision-makers in our society, how do we establish oversight and accountability for them? Does responsibility for the outcomes lie with software developers, or with the companies or governments using the systems?
- Really work for equality
Something I haven’t covered here is the issue of who is building this software, and for whom. Much as data discrimination could entrench inequality, so could limited or no access to advanced technology. Those who design and own the technology that powers systems in our society have much more power than those who don’t. We need to see gender, ethnic and geographical diversity properly addressed as AI develops.
Rights-first AI
The development and roll-out of AI systems has huge implications for our rights. We need technology companies to understand the potential human cost of the tech they’re developing, and to take the risks seriously — before that tech influences lives. There is an opportunity to develop ‘AI for good’ — we just need to make sure it’s for the good of all of us, rather than a select few.