The Globe and Mail

Nick Bostrom is a Swedish-born professor of philosophy at Oxford University and founding director of the Future of Humanity Institute at the Oxford Martin School. He is the author of the 2014 bestseller Superintelligence.


This piece is the continuation of a series in which Rudyard Griffiths, chair of the Munk Debates, Canada's leading forum for public debate, examines issues and trends on the horizon with top international thinkers and policy-makers.

Why do you think the development of super-intelligent machines could be an existential threat to humanity?

If you take a step back and think about the human condition, we can see that intelligence has played a shaping role. It's because of our slightly greater general intelligence that we humans now occupy our dominant position on planet Earth. And even though, say, our ape ancestors were physically stronger than us, the fate of the gorillas now rests in our hands because with our technology, developed through our intelligence, we have unprecedented powers. For basically the same reason, if we develop machines that radically exceed human intelligence, they too could be extremely powerful and able to shape the future according to their preferences. So the transition to the machine-intelligence era looks like a momentous event and one I think associated with significant existential risk.


How do we get from today's computers, with phenomenal brute computing power, to what you're talking about – a super intelligence that dwarfs our individual or even combined intellects?

When people were working on artificial intelligence back in the 1960s and 1970s, basically it was programmers putting commands in a box. You would hard-code a lot into the computer, and the resulting knowledge system was brittle; it didn't scale. AI research today is focused much more on machine learning. AI developers are trying to craft algorithms that enable the machine to learn from raw perceptual data and to infer conceptual representations from that data, say, distinguishing facial features, in ways similar to how human infants learn. The program starts out not really knowing anything and, just from taking data in through its eyeballs and ears, it eventually builds up a rich representation of the world around it. The overarching question driving artificial-intelligence researchers is how to capture the same powerful learning and planning algorithms that produce general intelligence in the human.

The point is, however long it takes to get from where research is now to a sort of human-level general intelligence, the step from human-level general intelligence to super intelligence will be rapid. I don't think that the artificial-intelligence train will slow down or stop at the human-ability station. Rather, it's likely to swoosh right by it. If I am correct, it means that we might go, in a relatively brief period of time, from something slightly subhuman to something radically super intelligent. And whether that will happen in a few decades or many decades from now, we should prepare for this transition to be quite rapid, if and when it does occur. I think that kind of rapid-transition scenario does involve some very particular types of existential risk.

Why is this super intelligence more likely to be a threat to humanity? Why couldn't it just as likely help us solve some of our greatest problems?

I certainly hope that it will help us solve our problems, and I think that might be a likely outcome, particularly if we put in the hard work now to figure out how to "control" artificial intelligences. But say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory.

If you think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don't switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms, and those atoms could be used to build more paper clips.

If you plug almost any goal you can imagine into a super-intelligent machine, most of those goals would be inconsistent with the survival and flourishing of human civilization.


Can we manage this risk the same way we did with nuclear weapons? In short, put in place international controls?

I think it is a misleading analogy. The obvious difference is that nuclear weapons require rare and difficult-to-obtain raw materials. You need highly enriched uranium or plutonium.

Artificial intelligence fundamentally is software. Once computers are powerful enough and once somebody figures out how to write the relevant code, anybody and his brother could create an AI in the garage.

A more fundamental difference is that a nuclear bomb is very dangerous, but it's inert. The nuclear bomb doesn't sit there and try to figure out some way in which it could explode itself or defeat our safeguards. With an AI there is the potential in a worst-case scenario to face off against an adversarial intelligence that is smarter than you, that would be trying to anticipate your countermeasures and get around them. And if that AI really is super intelligent, the conservative assumption would be that it would eventually succeed in outwitting us.

This is why we have to ensure that the AI's motivation systems are engineered in such a way that they share our values. In many ways, it's a one-of-a-kind challenge for humanity: to develop intelligences that will exceed us, in the sense of rendering human intelligence redundant, and yet to do it in such a way that human values get carried forward and shape this machine-intelligence future.

This interview has been edited and condensed.
