Nick Bostrom: ‘I don’t think the artificial-intelligence train will slow down’

Nick Bostrom is a Swedish-born professor of philosophy at Oxford University and founding director of the Future of Humanity Institute at the Oxford Martin School. He is the author of the 2014 bestseller Superintelligence.

This piece is the continuation of a series in which Rudyard Griffiths, chair of the Munk Debates, Canada's leading forum for public debate, examines issues and trends on the horizon with top international thinkers and policy-makers.

Why do you think the development of super-intelligent machines could be an existential threat to humanity?

If you take a step back and think about the human condition, we can see that intelligence has played a shaping role. It's because of our slightly greater general intelligence that we humans now occupy our dominant position on planet Earth. And even though, say, our ape ancestors were physically stronger than us, the fate of the gorillas now rests in our hands because with our technology, developed through our intelligence, we have unprecedented powers. For basically the same reason, if we develop machines that radically exceed human intelligence, they too could be extremely powerful and able to shape the future according to their preferences. So the transition to the machine-intelligence era looks like a momentous event and one I think associated with significant existential risk.

How do we get from today's computers, with phenomenal brute computing power, to what you're talking about – a super intelligence that dwarfs our individual or even combined intellects?

When people were working on artificial intelligence back in the 1960s and 1970s, it was basically programmers putting commands in a box. You would hard-code a lot into the computer, and the resulting knowledge system was brittle; it didn't scale. AI research today is focused much more on machine learning. AI developers are trying to craft algorithms that enable the machine to learn from raw perceptual data and to infer conceptual representations from that data; say, distinguishing facial features in ways similar to how human infants learn. The program starts out not really knowing anything and, just from taking data in from its eyeballs and ears, it eventually builds up a rich representation of the world around it. The overarching question driving artificial-intelligence researchers is how to capture the same powerful learning and planning algorithms that produce general intelligence in humans.

The point is, however long it takes to get from where research is now to a sort of human-level general intelligence, the step from human-level general intelligence to super intelligence will be rapid. I don't think that the artificial-intelligence train will slow down or stop at the human-ability station. Rather, it's likely to swoosh right by it. If I am correct, it means that we might go, in a relatively brief period of time, from something slightly subhuman to something radically super intelligent. And whether that will happen in a few decades or many decades from now, we should prepare for this transition to be quite rapid, if and when it does occur. I think that kind of rapid-transition scenario does involve some very particular types of existential risk.
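
To make the contrast Bostrom describes concrete, here is a minimal, hypothetical sketch in Python. It is not from the interview; the data and function names are invented for illustration. It contrasts a classifier whose rule is hard-coded by a programmer with one that infers its rule from labelled examples, a stand-in for the learning systems he mentions.

import random

def hard_coded_classifier(brightness):
    # 1960s/70s style: the rule is written by hand and covers only the
    # cases the programmer thought of.
    return "day" if brightness > 0.5 else "night"

def train_centroids(examples):
    # Machine-learning style: start out knowing nothing and infer a
    # decision rule from labelled examples. A nearest-centroid rule
    # stands in for a real learning algorithm here.
    grouped = {}
    for value, label in examples:
        grouped.setdefault(label, []).append(value)
    return {label: sum(vals) / len(vals) for label, vals in grouped.items()}

def learned_classifier(brightness, centroids):
    # Pick the label whose learned centroid is closest to the input.
    return min(centroids, key=lambda label: abs(brightness - centroids[label]))

if __name__ == "__main__":
    random.seed(0)
    # Synthetic "raw perceptual data": noisy brightness readings with labels.
    data = [(random.gauss(0.8, 0.1), "day") for _ in range(50)]
    data += [(random.gauss(0.2, 0.1), "night") for _ in range(50)]
    centroids = train_centroids(data)
    print(hard_coded_classifier(0.55), learned_classifier(0.55, centroids))

The point of the sketch is only the division of labour: in the first function a human supplies the rule, while in the second the rule is estimated from data.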

Why is this super intelligence more likely to be a threat to humanity? Why couldn't it just as likely help us solve some of our greatest problems?

I certainly hope that it will help us solve our problems, and I think that might be a likely outcome, particularly if we put in the hard work now to figure out how to "control" artificial intelligences. But say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory.

If you were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans so we can't switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms, and those atoms could be used to build more paper clips.

Plug almost any goal you can imagine into a super-intelligent machine, and most of those goals would turn out to be inconsistent with the survival and flourishing of human civilization.

Can we manage this risk the same way we did with nuclear weapons? In short, put in place international controls?

I think it is a misleading analogy. The obvious difference is that nuclear weapons require rare and difficult-to-obtain raw materials. You need highly enriched uranium or plutonium.

Artificial intelligence fundamentally is software. Once computers are powerful enough and once somebody figures out how to write the relevant code, anybody and his brother could create an AI in the garage.

A more fundamental difference is that a nuclear bomb is very dangerous, but it's inert. The nuclear bomb doesn't sit there and try to figure out some way in which it could explode itself or defeat our safeguards. With an AI there is the potential in a worst-case scenario to face off against an adversarial intelligence that is smarter than you, that would be trying to anticipate your countermeasures and get around them. And if that AI really is super intelligent, the conservative assumption would be that it would eventually succeed in outwitting us.

This is why we have to ensure that the AI's motivation systems are engineered in such a way that they share our values. In many ways, it's a one-of-a-kind challenge for humanity: to develop intelligences that will exceed us, to the point that human intelligence becomes redundant, and yet to do it in such a way that human values get carried forward and shape this machine-intelligence future.

This interview has been edited and condensed.
