
Emeritus professor of computer science at the University of Toronto Geoffrey Hinton was one of more than 350 industry leaders, academics and engineers who endorsed a statement to call attention to the most extreme risk posed by AI. Fred Lum/The Globe and Mail

A group of top artificial intelligence researchers and executives has warned that reducing the “risk of extinction” from AI should be treated as seriously as preparing for pandemics and preventing nuclear war.

More than 350 industry leaders, academics and engineers – including Geoffrey Hinton, Yoshua Bengio, OpenAI chief executive officer Sam Altman and top executives of Google DeepMind – have endorsed a 22-word statement released Tuesday to call attention to the most extreme risk posed by AI, the very technology many of them are ushering into the world. (Canadian musician Grimes, who has embraced AI to make music, is also a signatory.)

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement in its entirety, which was co-ordinated by the Center for AI Safety, a U.S. non-profit organization. An accompanying explanation notes that it can be difficult to express concerns about the most severe risks posed by AI and that the statement is intended to start a discussion.

Indeed, the notion that AI models will someday be powerful enough to outwit humans, run amok or otherwise wreak havoc on humanity is a divisive one. Some experts believe the technology is still far too underdeveloped and that such scenarios are highly unlikely.

Additionally, focusing on existential threats may distract from more immediate problems posed by AI, such as bias, job losses, the potential to spread misinformation and a lack of transparency into the data used to train computer models.


But for others who have devoted their lives to advancing AI, a deep worry is setting in. “It’s ceased to be seen as a kind of science fiction future,” Dr. Hinton told The Globe and Mail. “And it’s becoming seen as a quite plausible kind of a future.” The emeritus professor of computer science at the University of Toronto recently left his job at Google in order to more freely discuss his concerns about artificial intelligence, a field in which he has been one of the most influential figures.

Since then, Dr. Hinton has given interviews to media around the world about how quickly AI models, particularly those that can generate text and images, have advanced in recent months, warning of the potential threats they pose as they become more powerful. “There are no simple solutions that I can see,” he said. “With climate change, there’s fairly simple solutions. They’re unpalatable, like stop burning carbon, but they would work. With this, it’s not so obvious what would work. It’s not even certain that there is anything that would work.”

There is, however, a robust and continuing body of research into designing safe AI systems. Some of the signatories to the statement released Tuesday, including Mr. Altman, contend that the risks can be managed in order for society to reap the benefits of the technology, such as improved drug discovery and disease detection. “I don’t think it works to just say humanity should never build AI,” he told The Globe at an event in Toronto this month. OpenAI recently proposed creating a global safety organization for artificial intelligence, similar to the International Atomic Energy Agency.


The statement is not the first of its kind. In March, the Future of Life Institute put out an open letter calling for a pause on the development of some powerful AI models to give researchers, regulators and policy makers time to come up with adequate guardrails, arguing the technology poses “profound risks to society and humanity.”

Some criticized that letter for missing the mark. The Distributed AI Research Institute (DAIR), a U.S. non-profit, published a response saying the “hypothetical risks” raised in the letter amounted to “fearmongering and AI hype.” Regulators should be more concerned with ensuring the transparency of AI models and limiting the concentration of power among a few tech giants, the DAIR researchers noted. (The organization was founded by Timnit Gebru, the former co-head of Google’s ethical AI research team.)

Emily Bender, a University of Washington linguistics professor who has written about the problems posed by AI language models for years, dismissed Tuesday’s statement from the Center for AI Safety. “We should be concerned by the real harms that [corporations] and the people who make them up are doing in the name of AI, not about Skynet,” she wrote on Twitter, referring to the murderous AI system from the Terminator movie franchise.

Prof. Bengio, the scientific director of the Mila AI institute in Quebec, who won the prestigious Turing Award in 2018 alongside Dr. Hinton for their work on neural networks, said he signed the statement to increase awareness about the “catastrophic risks” posed by more powerful AI models. “It takes a lot of time for society to adapt to things like this, both in terms of legislation, people understanding the issues and the debates that are necessary in a democracy,” said Prof. Bengio, who has been outspoken in recent months about his concerns.

Even among AI experts, some soul-searching is under way. “Speaking for myself, if you’ve been working in this field for decades, and you’ve been building an image of yourself that you’re doing something good for humanity … and then you see these issues, these concerns. How do you change your mind about that?” Prof. Bengio said. “Changing so much is something that takes time.”
