
A sentry robot trains its machine gun on a hypothetical intruder during a test in Cheonan, south of Seoul, in 2006. AFP / Getty Images

If you are looking for signs of the coming robot apocalypse, look no further than the demilitarized zone between North and South Korea. Here, along the most fortified border in the world, more than just human soldiers are keeping watch.

Meet the SGR-A1, a sentry robot deployed by South Korea and developed by a subsidiary of Samsung. The 1.2-metre-tall machine looks like little more than a swivelling pair of black metal shoeboxes, but its soulless, infrared-equipped eyes can spot an enemy up to four kilometres away. Its machine gun stands at the ready.

For now, the robot alerts its human masters, sitting in a control room nearby, of any suspected intruders. And it needs the okay from a human being to open fire. But this robot also has the capacity to operate in automated mode and shoot first, completely on its own. Resistance, my fellow humans, is futile.

The Pentagon and militaries around the world are developing increasingly autonomous weapons that go far beyond the controversial remote-control drones Washington has used in Pakistan, Afghanistan and elsewhere. Technological advances will allow these new weapons to select and fire on targets all by themselves, without any human approval. Some predict they could one day fight side by side with human soldiers.

The notion of robots that kill, not surprisingly, makes some humans nervous. This week, an international coalition of roboticists, academics and human-rights activists called the Campaign to Stop Killer Robots addressed diplomats at the United Nations in Geneva. The group wants the development, construction and use of "lethal autonomous weapons" banned, even though it acknowledges that such weapons, while technologically feasible, have never been deployed.

"There is a question to be asked here about whether allowing machines to kill people diminishes the value of human life for everybody, even if you are not being killed by a robot," says Peter Asaro, an assistant professor in the School of Media Studies at the New School in New York and a spokesman for the Campaign to Stop Killer Robots.

It's just one issue that philosophers, legal scholars and scientists have been wrestling with as the world braces for an onslaught of new robots that, we are told, are destined to mow our lawns, care for the elderly, teach autistic children, and even drive our cars. These robots may not be designed to kill, but they are going to force governments and courts to deal with a tangle of legal and ethical questions: Whom do I sue when I get hit by a driverless car? What if a medical robot gives a patient the wrong drug? What if my vacuum robot sucks up my hair while I am napping on the floor (as actually happened to a woman in South Korea recently)? And can a robot commit a war crime?

It is this last question that most preoccupies Prof. Asaro. He and other members of the coalition, which includes Human Rights Watch, argue that if an autonomous computer system, programmed to discern enemy soldiers from non-combatants, actually pulls the trigger without "meaningful human control," it becomes very difficult to hold anyone to account should something go wrong.

After a robot goes haywire and slaughters an entire village, you cannot haul it, an inanimate object, before the International Criminal Court, or subject it to a court-martial. And some experts say you may not even be able to legally blame its human masters or designers, provided they never intended for the autonomous robot to go berserk.

In criminal law, courts need to find what is known as mens rea, Latin for a "guilty mind." But they would have trouble finding that any robot – with the current and foreseeable state of artificial intelligence – had any mind at all. Its designers or operators might face civil liability if they were found responsible for, say, a software glitch that was to blame. But even they may be too far removed from the actions of a killer robot to be held accountable, some warn.

Those killer robots could look like anything from massive autonomous drones to Mars Rover-type robots mounted with machine guns. In a 2012 directive, the U.S. Department of Defense said that autonomous systems "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." In an e-mail response to a question from The Globe and Mail, the Canadian Department of National Defence said it is "not currently developing any lethal fully autonomous weapons systems" but that its research agency has an "active program of research in unmanned systems."

At least one robotics company, Waterloo, Ont.-based Clearpath Robotics Inc., has come out against killer robots, despite building robots and labs for the Canadian and U.S. militaries. Ryan Gariepy, a co-founder of Clearpath and its chief technology officer, said mine-clearing or surveillance were good uses for autonomous military robots – but not killing, particularly if such machines are just on one side of the battle: "Is it okay for robots to effectively issue death sentences? … Is it a human right to have the other side of the conflict put up the lives of their soldiers as well?"

Inevitably, the killer-robot discussion gets around to science-fiction writer Isaac Asimov's "three laws of robotics," featured in his short stories. The first law bans robots from harming humans; the second orders them to obey humans unless that means violating the first law; the third orders robots to protect their own existence, provided doing so doesn't violate the first two laws. But most experts say those laws are of little use in the real world: Their shortcomings, after all, provided Mr. Asimov with the twists in his plots.

Still, Georgia Institute of Technology professor Ronald Arkin, a prominent U.S. roboticist who works on Pentagon projects, argues that killer robots or other automated weapons systems could be programmed to follow the laws of war – and follow them better than humans. A robot, he says, would never need to fire in defence of its life, or out of fear. It would have access to information and data that no human soldier could process as quickly, making it less likely to make a mistake in the "fog of war." It would never intentionally kill civilians in retaliation for the death of a comrade. And it could actually keep an eye out for human soldiers who commit atrocities.

Prof. Arkin also argues that autonomous technology is, effectively, already on the battlefield. U.S. Patriot missile batteries, he says, automatically select targets, giving a human supervisor as little as nine seconds to override them and tell them to stop. And the U.S. Navy's Phalanx system protects ships by automatically firing on incoming missiles.

But even he calls for a moratorium on the deployment of more autonomous weapons until it can be shown that they can reduce civilian casualties. "Absolutely nobody wants a Terminator. Who would want a system that you just send out, and it's capable of figuring out who it should kill at random?" Prof. Arkin says. "… These kinds of systems need to be designed carefully and cautiously, and I think there are ways to do that."

In recent months, big names in the technology world have sounded ominous alarms about artificial intelligence. Physicist Stephen Hawking warned that AI "could spell the end of the human race." Tesla Motors co-founder Elon Musk has called AI "an existential threat" as dangerous as nuclear weapons. Microsoft co-founder Bill Gates is also concerned.

The people who actually work on artificial intelligence say there is little to worry about. Yoshua Bengio, a leading researcher in AI at the University of Montreal, says that Star Wars and Star Trek fantasies prompt many to overestimate the abilities of robots and artificial intelligence. One day, he says, we might build machines that could rival human brains. But for now, AI is just not that intelligent.

"There's a lot to do before robotics becomes what you see in the movies," Prof. Bengio says. "To get the level of knowledge of understanding about the world that we expect of, say, a five-year-old, or even an adult – we are very far from [that]. And it might take decades or more. … Even what a mouse does is not something that we are able to replicate."

There is little question, however, that developments in both robot hardware and software are racing ahead.

Robots being developed by Boston Dynamics, which was recently acquired by Google, can scamper over rough terrain the way dogs do, and crawl like rats over rubble in a disaster zone. An elephant-like robot can lift a concrete brick with its trunk and hurl it over its shoulder. Another dog-like metal creature, named Spot, can keep its balance after being kicked in the side, its metal legs scrambling in an eerily lifelike way.

But if you fear that the killer-robot revolution is imminent, Atlas, a six-foot-tall, 330-pound humanoid, might put you at ease. Also created by Boston Dynamics, it can climb stairs and even pick its way, gingerly, over uneven piles of rubble. It looks like one of the Cybermen from Doctor Who as designed by Apple. But its battery pack lasts only an hour. We won't be kneeling before our Atlas masters any time soon.

It's more likely, says Ian Kerr, who holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, that humans will gradually relinquish more control to artificially intelligent computers – computers that will increasingly do certain things better than we do. Even though it may be a while before we have fully driverless cars on our roads, high-end vehicles are already parallel parking themselves. International Business Machines Corp.'s Watson computer, which won Jeopardy! in 2011, is now being used to sift through millions of pages of medical studies to recommend treatments for cancer patients.

"There will be a point where the human … is kind of in what I like to think of as the same position as Abraham on the mountain, where he hears the voice of God but still has to decide for himself what to do," Prof. Kerr says. "So then it is very easy to think, from a liability perspective, that the hospitals will want [doctors] to rely on the machines, because the machines have a better track record. … And all of a sudden, we find ourselves where we have taken humanity out of certain kinds of decision-making, such as driving, or war."
