By Natasza Piasecka, Campaigner on Military, Security and Policing Issues at Amnesty International
It’s a week before Christmas and the mountains overlooking the UN’s Palais des Nations in Geneva are covered in snow, making the wintry landscape look postcard perfect. In stark contrast to the tranquil setting, diplomats inside are engaged in heated discussions about an issue with potentially devastating implications for human rights. At stake is how to control deadly technologies that could shape future warfare: autonomous weapons systems, also known as “killer robots”.
These weapons use sensors and other technologies to select and attack targets without meaningful human control. In what is developing into a new arms race, a number of states are funnelling substantial resources into developing these technologies. On Friday, the diplomats gathered in Geneva at the Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons will make a crucial decision on whether to start negotiations on a new law to regulate this burgeoning field of hi-tech warfare.
Delegating life-or-death decisions to machines raises fundamental moral questions about human dignity and the kind of society we want to live in, and fatally undermines accountability and compliance with international humanitarian and human rights law.
The battlefield is an inherently chaotic environment; soldiers are subject to strict rules and codes of conduct designed to ensure they respond lawfully and carry out their commander’s intent. With a virtually limitless number of possible scenarios, machines are not capable of making the necessary complex ethical decisions or of analysing rapidly developing events on the ground in real time.
Imagine a situation in which a killer robot misidentifies a target because circumstances change quickly, or fails to recognise signs of surrender or injured combatants who are hors de combat, thereby ignoring key principles enshrined in international humanitarian law. Further, machines will be programmed differently depending on which country developed them and to what technical specifications. Why does that matter? Differently calibrated training programmes will identify, analyse and respond to objects in the real world in different ways. It is also unclear how different types of killer robots would interact with each other.
It is already well documented that technologies such as facial recognition and other recognition systems systematically fail to recognise women, people of colour and persons with disabilities. Deploying these technologies on the battlefield, in law enforcement or in border control would be disastrous.
As we have seen with the use of military equipment in law enforcement, it is only a matter of time before this technology is used on the streets of cities around the world. In an era of cyber warfare, and given the well-known threats posed when weapons go astray or are captured by armed groups, it is not far-fetched to imagine the devastating risks to international security that such situations would present.
It is essential that we ensure human control over the use of force.
A handful of states, mostly those that have already invested heavily in the development of autonomous weapons systems, are attempting to block progress towards a mandate to negotiate a legal response to the risks they pose. The risks of these weapons make for dispiriting reading, but there is reason for optimism. There is a sense of urgency among states and a recognition of the high expectations placed on the outcome of this week’s talks. A clear majority of states support the need for a legally binding instrument, and strong groupings have emerged and coalesced around agreement on specific elements of a potential future law.

At the national level, there have been a number of positive recent developments. New Zealand has committed to banning killer robots. The Dutch government sought a legal opinion on autonomous weapons, which recommended that the “government actively pursue the formulation of an explicit prohibition of the development and use of fully autonomous weapon systems”. The new government in Norway has signalled in its political platform that it will take “the necessary initiatives to regulate the development of autonomous weapons systems”.
Beyond the UN, the issue has galvanised strong opposition among activists, scientists, technologists, roboticists, legal scholars and others. The Stop Killer Robots global coalition, of which Amnesty International is a founding member, now has more than 185 member organisations in over 67 countries.
Amnesty International and Stop Killer Robots’ petition, launched only last month, has already gathered over 17,000 signatures from members of the public urging states to start negotiating a legally binding law on autonomous weapons, and the numbers are growing every day. On the opening day of the Review Conference, Amnesty and Stop Killer Robots handed over a placard representing the petition signatures collected from people around the world calling on their governments to launch negotiations for a new international law that ensures human control over the use of force and prohibits machines that target people.
We are calling for a legally binding international instrument that prohibits autonomous weapons systems that cannot be used with meaningful human control and bans all systems that use sensors to target humans. We are also calling for regulation of other systems that use autonomy in the critical functions of selecting and engaging targets. The world is watching and the time to act is now. States must break the deadlock by adopting a mandate to negotiate a legally binding instrument. A new treaty is the only mechanism strong enough to tackle the multitude of risks raised by killer robots. Political action must outpace these dangerous technological developments before it is too late.