
British theoretical physicist Stephen Hawking speaks to members of the media at a press conference in London on December 2, 2014. (Justin Tallis/AFP/Getty Images)

Stephen Hawking is the world's most famous physicist, and even he's worried about being outsmarted. Not by a better mathematician, but by a machine.

He laid out his concern in London on Tuesday, even as he demonstrated a new "intelligent" communication system – designed by Intel with SwiftKey, the British predictive-keyboard startup – that updates the 20-year-old setup built around his voice synthesizer. The author of A Brief History of Time lost his voice in 1985, a complication of his paralyzing degenerative condition, amyotrophic lateral sclerosis. As he lost more motor control over his body, his rate of composing words on the old system slowed to about one a minute. The new system promises to double his speaking and writing speed thanks to predictive-text algorithms.
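The Intel–SwiftKey system itself is proprietary, but the principle behind predictive text is simple to sketch: a model learns which words tend to follow which, then offers the likeliest candidates so the user selects a whole word instead of spelling it out. Below is a minimal, hypothetical illustration in Python; the train and suggest helpers and the toy corpus are invented for this sketch and are not drawn from the actual system.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> defaultdict:
    """Toy bigram model: count which words follow each word."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model: defaultdict, prev: str, k: int = 3) -> list:
    """Offer the k likeliest next words, so one tap replaces many keystrokes."""
    return [word for word, _ in model[prev.lower()].most_common(k)]

corpus = ("the universe began with a big bang "
          "the universe is expanding "
          "a brief history of the universe")
model = train(corpus)
print(suggest(model, "the"))  # ['universe'] -- every 'the' in the corpus precedes it
```

Production systems like SwiftKey's use far richer language models plus personalization to the user's own writing, but the speed gain comes from the same idea: each accepted suggestion replaces many individual keystrokes.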

"By making this technology freely available, it has the potential to greatly improve the lives of disabled people all over the world," Mr. Hawking said.

So, it might seem a strange moment for him to point out that while limited forms of artificial intelligence and machine learning have turned out to be very useful, going any further risks a dystopian future and possibly the end of human life as we know it.

In an interview with the BBC, he said: "I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever increasing rate."

Speaking with the Financial Times of London, he said he worried that artificial intelligence could "outsmart us all."

"According to Moore's Law, computers double their speed and memory capacity every 18 months. The risk is that computers develop intelligence and take over. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Mr. Hawking is far from the first person to raise doubts about the wisdom of artificially intelligent machines. The legend of the Golem, a clay automaton brought to artificial life, is an enduring archetype of threat in Western culture, echoed in everything from Mary Shelley's Frankenstein to HAL, the villainous computer in Stanley Kubrick's 2001: A Space Odyssey.

There is a popular strain in Internet culture of robot appreciation and fear. News of the latest Google robotics acquisition or drone-delivery system is met with hundreds of wags tweeting "welcome to our new robot overlords" (there is a popular Tumblr tag that reads simply "Robot Overlords").

"Our fears in technology ethics – and especially in robotics – seem to be a reflection about fears about ourselves," says Patrick Lin, the director of the Ethics and Emerging Sciences Group at California Polytechnic State University. Prof. Lin edited a collection of essays on our relationship with robots (helpfully called Robot Ethics: The Ethical and Social Implications of Robotics) and points out that we draw on our previous experience with "self-replicating, intelligent autonomous things" when we imagine the unpredictability of new forms of intelligent life. "We already have a proof of concept from nature: ourselves. Humans create children every day, some of whom turn out bad – the worst things this world has ever seen."

Even billionaire technologist Elon Musk, chief executive of the electric car maker Tesla and founder of the rocket company SpaceX, among other ventures, issued a warning about artificial intelligence in October at MIT's AeroAstro Centennial Symposium.

"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful," Mr. Musk says. "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

At least on this front, Prof. Lin agrees. "Technology gives us superpowers. … History proves time and time again that people tend to abuse new powers, such as drones.

"If the stakes are high enough, we may be justified in adopting precautionary measures ahead of the technology, or at least consider the possible scenarios seriously to better evaluate the risks."

This isn't the first time Mr. Hawking has expressed fear over a potential existential threat to humanity. In 2010, he told the Times of London: "If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn't turn out very well for the Native Americans."

In Into the Universe with Stephen Hawking, a Discovery Channel documentary series, he speculated about the capabilities of a visiting race of extraterrestrials. He suggested they might be able to harness the power of a star to open wormholes for interstellar travel, and postulated some very dark motives for wandering the starways: "I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they can reach."

Compared with that scenario, we might be better off with the robot overlords.

Note to readers: This is a corrected story. An earlier version incorrectly gave the title of Stephen Hawking's book as A Brief Moment in Time.
