Killer robots must be banned before they get loose on the battlefield, a rights group warned Monday, calling for an international treaty outlawing military weapons systems that decide – without a human “in the loop” – when to pull the trigger.
Human Rights Watch and Harvard Law School’s International Human Rights Clinic call for all states to agree to ban the “development, production and use of fully autonomous weapons.” They also want robot designers to enact a “code of conduct” to keep the genie of killing machines with artificial intelligence in the bottle.
Futuristic, near-autonomous weapons are already deployed and firing. Israel’s Iron Dome anti-missile defence system detects, tracks, targets and then – giving a human only a split second to approve or reject – shoots down incoming rockets. Similar systems protect U.S. warships – once armed, they don’t even ask permission to shoot. But their targets are incoming weapons, not human combatants.
In South Korea, hulking robot sentries armed with rapid-fire guns and grenade launchers can detect an intruding human in the tense “demilitarized zone” up to two kilometres away. They too – for now – need a human triggerman, but only because the algorithm requires that the machine ask the man. Along the Israel-Gaza border, other armed robot sentries are also deployed. As yet, they still require human approval to open fire. “At least in the initial phases of deployment, we’re going to keep the man in the loop,” an Israeli commander told Defense News. Similarly, American Predator drones still need a human – albeit one far from the killing zone and often thousands of kilometres away – to launch the Hellfire missiles that routinely obliterate suspect jihadists.
But it’s the next stage, already on the drawing board and close to battlefield-ready, that Human Rights Watch wants banned. Those so-called killer robots, armed with lethal weapons and run by algorithms that – once unleashed – require no human “in the loop” should be banned before they are deployed, the group said.
“Action is needed now, before killer robots cross the line from science fiction to feasibility,” said Steve Goose, arms division director at Human Rights Watch.
In the first major public examination of the legal and ethical quandaries created by giving machines the ability to choose who lives and who dies on the battlefield, Human Rights Watch and the Harvard Law School International Human Rights Clinic call for an outright ban on fully autonomous weapons.
Such a ban would require a major new arms treaty. Killer robots would be the first class of weapons banned before first use. Other prohibitions – such as the treaty outlawing chemical weapons, enacted after the horrors of the First World War, and the more recent, and more limited, ban on land mines – came only after the weapons had been used.
While ever-more sophisticated weapons technology seems inevitable, keeping a human “in the loop” is essential both for the decision to shoot and for accountability for that decision, said Bonnie Docherty, lead author and primary researcher of the 50-page report.
“Fully autonomous weapons would never be able to comply with international humanitarian law and would undermine the protections for civilians,” she said in an interview. They would also, she added, “leave an accountability gap.” Who would face war-crimes charges if the robot gunned down innocent school children or killed surrendering combatants?
Some military analysts predict it will be decades before killer robots are sufficiently sophisticated to be battlefield ready. But other breakthrough technologies – such as stealth aircraft – were deployed before the Pentagon even acknowledged their existence.
However, some proponents of killer robots contend they can be safer than human combatants who suffer from stress and fear, and make mistakes under pressure.
John McGinnis, a Northwestern University Law professor, suggests that “artificial-intelligence robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars.”
While the report acknowledges that potential for decision-making by advanced artificial-intelligence weapons systems, it says “even if the development of fully autonomous weapons with human-like cognition becomes feasible, they would lack certain human qualities such as emotion, compassion and the ability to understand humans.”