Stopping the killer robots before it's too late

Amnesty International is part of the global Campaign to Stop Killer Robots, whose aim is to curb the rise of lethal autonomous weapons systems © Oli Scarff/Getty Images

Picture this: war breaks out in a far-off place, chaos and destruction abound, and countless civilians are killed and injured by Terminator-like automatons tasked with rooting out enemy combatants.

Elsewhere, peaceful protesters take to the streets to demand their rights but find themselves violently suppressed by Robocop-like police and weaponized robotic vehicles that track the protesters’ every move and shoot bullets and tear gas at them.

These scenarios might be the stuff of Hollywood blockbuster fantasy, but robots that can target, attack, kill and injure won’t be science fiction for much longer.

Killer robots – or lethal autonomous weapons systems (LAWS), as governments refer to them – are for the first time the subject of intense discussions by a majority of states at the UN in Geneva, at this week’s four-day Convention on Certain Conventional Weapons (CCW) Experts Meeting, which wrapped up today.

Killer robots are fully autonomous weapons that can choose and fire at human targets on their own, without any human intervention. They are usually described as “lethal” because they are designed as military weapons to kill in armed conflict. Companies in the US, UK, Germany, Israel, UAE, Jordan and South Africa are also developing “less lethal” robotic weapons for policing that are remotely operated or fire automatically when touched. These can apparently shoot tear gas, rubber bullets and electric-shock darts. Fully autonomous weapons don’t yet exist, but rapid advances in technology are bringing them closer to reality.

For example, UK-based BAE Systems has developed the prototype Taranis combat drone aircraft, which straddles the line between drones and robots. The Taranis is designed to fly intercontinental missions at supersonic speeds, undetected by radar, and almost completely free of human direction. The Taranis reportedly includes two weapons bays that could eventually carry bombs or missiles.

Also in existence is Atlas, a humanoid robot developed primarily by the American robotics company Boston Dynamics, with funding and oversight from the United States Defense Advanced Research Projects Agency (DARPA).

The development of tiny ‘nano’ robots, whether to be used individually or in swarms, further complicates an already complex technological landscape as these devices will likely become weaponized over time in somewhat of a legal vacuum.

From a human rights perspective, autonomous lethal and less lethal robotic weapons are extremely worrying for several reasons.

Firstly, Amnesty International believes that the use of weapons without meaningful and effective human control would increase the likelihood of unlawful killings and injuries, both on the battlefield and in policing operations.

Secondly, although some governments and technology experts argue that autonomous killer robots could be programmed to comply with international law, to many others this seems impossible. Amnesty International believes that in policing operations, LAWS without meaningful human control wouldn’t be able to properly assess complex policing situations and comply with relevant standards. Policing standards prohibit the use of firearms except in defence against an imminent threat of death or serious injury, and it’s very difficult to imagine a machine substituting for human judgment, which is critically important to any decision to use lethal force.

Similarly, in armed conflict situations, we believe that robotic weapon systems wouldn’t be able to comply with the laws of war, including the rule requiring armed forces to distinguish between combatants and civilians, to take necessary precautions to minimize harm to civilians, and to evaluate the proportionality of an attack.

Finally, allowing robots to have power over life-and-death decisions crosses a fundamental moral line. For us, it also violates the human rights to life and dignity: the killing of humans by machines is the ultimate indignity, and humans should not be reduced to mere objects.

It’s for these and other reasons that Amnesty International believes a new international law is needed to address this emerging weapons technology. We believe a total ban on the development, deployment and use of lethal autonomous weapons systems is the only real solution. Taking a ‘wait and see’ approach could lead to further investment by states in these weapons systems and their rapid proliferation in a new arms race.

Governments will meet again to discuss what to do about killer robots at the CCW Annual Meeting in November, and we hope a formal negotiation process will then begin to establish a new global ban. Amnesty International, together with our partner NGOs and experts in the Campaign to Stop Killer Robots, will work with governments to ensure we don’t slide into a sci-fi dystopia where killer robots run amok.

Read more:

Campaign to Stop Killer Robots