Artificial Intelligence for good
Industry and academia are working towards the next generation of computers that can understand and learn from natural spoken language, full motion video, and more. Given how much the world is already changing, what can we expect from the next generation of Artificial Intelligence systems? How will these technologies affect the world?
Amnesty International’s Secretary General Salil Shetty addresses the AI for Good Global Summit in ITU in Geneva on 7 June 2017
You have heard, and you will hear a lot more in the course of these days, about the amazing things that artificial intelligence could do in the future. I want to talk to you about human rights today and tomorrow – so no surprise on that front! I wanted to talk in a setting which makes it clear that, as far as Amnesty is concerned, there are huge possibilities and benefits from artificial intelligence – AI for Good: it is an amazing idea to have this conference. The question is: who is it good for? And we are going to talk more about that in the coming period.
In the eyes of the general public, and certainly within Amnesty International as well, it is difficult for people to connect AI to our bread-and-butter work – as you can imagine, that is freedom of expression, stopping torture, holding dictators to account; that is the kind of work Amnesty typically does. But we are equally looking more and more into rights in the workplace, access to healthcare and education, and economic, social and cultural rights as well.
I want to refer to an experience that remains fresh in my mind, and I will come back to it at the end of my speech. A few years ago I was in a rural part of the Czech Republic and I was taken to a school to look at the situation of Roma girls’ education. I spent some time talking to a girl from a Roma family – she must have been 10 or 11 years old and I think her name was Dinka. Her face is etched in my memory – you could see from her eyes how bright and enthusiastic she was, but at the same time I remember the obstacles she had to overcome in order to get into that school. Let’s remember that this is the Czech Republic – part of the European Union – and this is around 2015. More shocking was the sad fact that as soon as a group of Roma children had managed to get into this school, a lot of the non-Roma children left because they did not want to be in the same school. So let’s keep Dinka in mind, and we’ll come back to her later.
The story is relevant because I believe that AI is at a fork in the road and we have a clear choice in front of us. And we need to be discussing these choices. One of the powerful concepts Mahatma Gandhi talked about is Antyodaya, which means focusing on the benefits to the very last person in society. When you have to make difficult choices, you should always ask yourself what impact they have on that last person. So how would our decisions on AI affect Dinka, or a young girl in Afghanistan?
AI in a world of rising inequality
Humanity may live on one planet – at least for the time being – but we inhabit different worlds.
At one end is prosperity and wealth, technology that is continuously evolving to fulfil any need we can imagine, and the future prospects of incredibly long and healthy lifespans, space travel, and much more. At the other end is a reality of poverty and injustice that has scarcely improved despite the arrival of the internet, mobiles, big data and now AI. In between them is one of the biggest threats to the health of our societies: inequality.
If we just look at global development indicators, there is a very positive picture. Extreme poverty, undernourishment and maternal mortality are falling, while access to education and clean water is increasing – the world has achieved significant progress in the past 25 years. But at the same time, economic inequality has been rising. When I talk about economic inequality I also like to talk about voice inequality: those who are left behind have no voice.
Even in the OECD countries, including many of the wealthiest countries in the world, income inequality is at its highest level for 50 years. The average income of the richest 10% of the population is about nine times that of the poorest 10%, up from about seven times 25 years ago. These numbers have been thrown at you before and you will have seen them. One of the most bizarre statistics, of course, came out earlier this year from Oxfam: eight individuals hold more wealth than half of the world’s population – 3.6 billion people have as much wealth as eight individuals.
Around the world, from Asia to America and from Europe to the Middle East, we are seeing what happens when inequality grows and is ignored – anger simmers and the politics of blame thrive. Immigrants are scapegoated, refugees rejected; discrimination against women, Muslims, black people, starts dividing the world. Inequality fuels xenophobic populism and isolationism, at a time when we need to work together to solve the global problems we face.
But growing inequality is neither an accident, nor is it without consequence. Inequality and the political instability it generates are direct consequences of government policy and corporate practice. The way companies and governments handle technological innovation is a key part of this.
To take one example: one of the most important phenomena of the past few years has been the rise of the gig economy. In the UK, there are now 1.3 million people working in gig economy jobs. They have no guaranteed working hours or sick pay. They lack many of the rights that workers in regular jobs enjoy. At the same time, technology companies have made billions by providing platforms for those so-called flexible work arrangements.
Of course, businesses can and should innovate, but are they paying their fair share? If we look at growing corporate profits together with wide-ranging cuts to healthcare, education and other public services, even in the West, we know that something isn’t working. Tax evasion tactics have allowed hugely profitable companies to pay very little tax, while whole job categories disappear and are replaced by low-paid, insecure jobs. It is a rule in life: if you take and take and don’t give back, people will get pissed off.
Some will argue that for business what matters is growth and profits, and that this has to be their priority, but they then cannot complain about the growing disillusionment with business and government. Societies must plan for, and mitigate, the risks. This is particularly critical in the case of AI, which has the potential to create huge economic disruption.
As economic inequality grows, there is another kind of technology-fuelled inequality that’s growing, at an exponential rate – this relates to data. There is already a huge asymmetry in power between companies and some governments on one hand, and individuals and civil society on the other, as a result of the control that a handful of companies and governments exercise over unimaginable amounts of personal data. Whether it’s to sell us ads or for electronic surveillance programs, data gives formidable power to those who control it. But we’ve also heard how big data analysis and the micro-targeting of voters was used in the US elections and the Brexit referendum – data is now threatening the very concept of democracy itself. While the use of data in politics is not new, the difference is how powerful these techniques have become in a short space of time and how artificial intelligence could supercharge all this. What will happen tomorrow?
The future of AI
So what will happen tomorrow? Let’s fast forward 20 years. What world do we want in 2037? Technologies, chief among them artificial intelligence, will shape tomorrow’s world, that’s a certainty. I don’t want to build a binary picture, but let’s for the purpose of this conversation visualize two scenarios:
If we continue as we are: hundreds of millions of jobs will be lost to automation and largely replaced with insecure jobs with little protection for workers’ rights. We may have some social protection schemes but they will barely allow people to survive, with little hope for the future.
AI will be used across the board, in healthcare, education and across public services. Robocops will patrol our streets alongside human police officers. Wars will be fought by killer robots, reducing the human cost of war for the wealthy and powerful, but not for the poor.
We already know how data-driven systems – from financial to predictive policing applications – can end up discriminating against minorities and poor people. What will it look like in 20 years’ time? AI systems may become the gatekeepers deciding who can access healthcare and who cannot, who qualifies for a job or a mortgage and who does not. Those with power and access to the fruits of the data economy – a handful of companies and countries – will be those that continue to gain, while the vast majority of people are left behind. This brave new world may entrench global inequality on a scale never before imagined. And with this will come massive political upheaval and disruption.
But there is also the possibility of a different world. This conference and the initiatives that come from it offer us an opportunity to take a critical step in identifying principles for the ethical development and use of AI. Companies could take notice and governments could respond. In the future, we could have artificial intelligence systems that detect and correct bias in data, rather than doubling down on human bias; we could have automation that takes people out of dangerous and degrading jobs, but also educational and economic policies that create opportunities for dignified and fulfilling work. Governments could ban fully automated weapons systems – so that killer robots never come into existence.
This is a future where the enormous power and potential of AI is harnessed for the good of humanity, promoting equality, freedom and justice. It’s a future where open source AI allows innovators across the world to harness the power of technology, where explainable AI is developed and used, allowing for AI decisions to be interrogated and challenged, and with clear legal accountability systems to ensure that the rights and responsibilities of users and developers are clear.
In short, it’s a future where AI is a technology where human rights is a core design and use principle.
Amnesty International's Artificial Intelligence and human rights initiative
But this can only happen if we start out now with an objective to protect human dignity, equality and justice. AI is built by humans and it will be shaped by human values. If we build AI systems that are a mirror to our current societies, they will be riddled with the historical biases and inequalities of our societies.
But we can do better. When states signed the Universal Declaration of Human Rights in 1948, they were not simply reflecting the world they lived in – but an aspirational world. A world which would stand up for and protect every human being’s dignity. We must today challenge ourselves to be aspirational again as we prepare for a future world where AI and technology are integrated into every aspect of people’s lives.
Fortunately we already have the Sustainable Development Goals (SDGs) – the 2030 goals which almost every country in the world signed up to at the United Nations just two years ago. A core principle of the SDGs is leaving no one behind, and this principle should guide us as we think about the discussion here today.
It is heartening to see the interest and dedication of many in the technology community, many of whom are here today, to ensure that the development of AI is ethical. We believe that such ethics must be based on human rights principles. These are universal principles that have developed and matured over nearly 70 years – and have been applied in national contexts across the world. Governments have binding human rights obligations and corporations have a responsibility to respect human rights. We strongly believe that enshrining AI ethics in human rights is the best way to make AI a positive force in our collective future.
Amnesty International exists to bring about a world where human rights and freedom are enjoyed by everyone, everywhere. As the world’s oldest and largest people’s movement for human rights, with more than seven million activists standing up for human rights across the world, our work today is as much about campaigning for the release of prisoners of conscience and the protection of the world’s 21 million refugees as it is about calling for a future where technology, including AI, promotes freedom, dignity, equity and justice.
And so, I take this opportunity to announce Amnesty International’s artificial intelligence and human rights initiative and, building on the principles to be adopted by this conference, I invite you to collaborate over the coming months and years to enshrine the protection of human rights in the development and use of AI.
So going back to Dinka, who I referred to at the beginning of my speech, let’s set our minds to think about what happens when Dinka grows up, what happens when our children and our grandchildren grow up. How will they judge us on what we did today, on what we decided today?
What is Amnesty International’s AI and human rights initiative?
The AI and human rights initiative aims to tackle current and future human rights challenges posed by AI technologies.
A core part of the initiative is the development of human rights principles for AI. There are various efforts to develop ethical principles for the development and use of AI, and we believe such principles should enshrine existing human rights standards. Through our large network of human rights defenders and partner organisations, we aim to facilitate dialogue with diverse global civil society voices about the ethics of AI, in order to ensure that the development of ethical and human rights principles for AI is guided by global human rights perspectives. Amnesty is planning a series of events to facilitate these discussions over the coming months and years.
In addition, and building on our campaigning against the development of ‘killer robots’, we will also tackle the current use of AI in other fields through a body of new research, starting with the use of AI in policing. This research will inform our work on human rights principles for AI.
As well as tackling the human rights challenges posed by AI technologies, Amnesty International is also exploring ways in which AI can help solve global human rights challenges.