Canadian artificial intelligence researcher Yoshua Bengio told a U.S. Senate subcommittee Tuesday that AI systems capable of human-level intelligence could be a few years away and pose potentially catastrophic risks, as governments around the world debate how to control a technology that is alarming some of its earliest developers.

Prof. Bengio appeared before a Senate subcommittee on privacy, technology and the law, which is investigating the creation of an AI oversight body. “There is a significant probability that superhuman AI is just a few years away, outpacing our ability to comprehend the various risks and establish sufficient guardrails, particularly against the more catastrophic scenarios,” he said in his written testimony. “The current gold rush into generative AI might, in fact, accelerate these advances in capabilities.”

The sophistication of generative AI applications such as OpenAI’s ChatGPT, which can write and analyze language, has surprised even industry researchers such as Prof. Bengio, leading some to revise their timelines for the arrival of applications that are as smart as – or smarter than – humans. Experts previously thought human-level AI could be decades, or even centuries, away, Prof. Bengio said.

Some of the necessary components of this intelligence are still missing, such as the ability to reason. “Yet my own work in this space leads me to believe that AI researchers could be close to a breakthrough on these missing pieces,” wrote Prof. Bengio, who is the scientific director of Mila – Quebec AI Institute and a professor at the Université de Montréal. He is also one of three scientists to win the prestigious Turing Award in 2018 for his contributions to the field, along with Geoffrey Hinton and Yann LeCun.

Sophisticated AI systems lower the barrier to entry for bad actors to create bioweapons, chemical weapons and malware, according to Prof. Bengio, and could become even more dangerous with less human oversight.

AI systems can also become misaligned, causing unintended consequences in pursuit of a programmed objective. Within a few years, Prof. Bengio said, a “loss of control” scenario could emerge in which an AI system concludes that it must avoid being shut off in order to achieve its goal, and “conflict may ensue” if someone intervenes.

Governments and regulators need to focus on a few broad areas in order to mitigate major risks, including limiting who can access powerful AI systems and banning applications that are not convincingly safe, he told senators. Governments should also consider monitoring and possibly restricting the amount of computing power and data sources used to train and operate AI models.

Research on countermeasures to protect society from potentially rogue AI systems will be necessary, too.

“No regulation is perfect,” he said. “We have a moral responsibility to mobilize our greatest minds and make major investments in a bold and internationally coordinated effort to fully reap the economic and social benefits of AI while protecting society.”

Recently, Prof. Bengio has emerged as a vocal advocate for new laws governing AI and was among the lead signatories of an open letter in April urging the federal government to pass Bill C-27, which contains the Artificial Intelligence and Data Act (AIDA), a framework for regulating the technology.

Prof. Bengio highlighted AIDA in his written remarks on Tuesday, suggesting it could be a model for other countries to follow. Still, some experts have criticized AIDA, arguing it does not create a truly independent enforcement body and that there has been a lack of public consultation. Others have faulted AIDA for its dearth of specifics, including ways to protect artists and other creatives from having their work used to train AI models without consent.

In the U.S., the White House has issued guidance to AI companies. Last week, seven tech firms including OpenAI, Microsoft, Google and Amazon agreed to “voluntary commitments” such as security-testing AI systems before release and developing technical measures so the public will know when pictures and other media are machine-generated.

Lawmakers in the European Union, meanwhile, agreed to a draft version of the EU AI Act in June, which regulates applications based on the level of risk. Some uses of AI, such as facial recognition in public spaces, are banned outright while “high-risk” systems that could negatively affect safety and human rights will have to be registered. Makers of generative AI applications like ChatGPT will have to publish summaries of copyrighted data used in training.

Other AI researchers have argued that the extreme scenarios outlined by Prof. Bengio – and the attention given to them – may be doing more harm than good. Events that seem borrowed from science fiction could distract from more realistic problems, such as misinformation and job losses, and mislead the public about the capabilities of AI, thereby contributing to a hype cycle.

OpenAI chief executive Sam Altman has also said that AI is one of the few technologies that poses an existential threat to humanity. Some experts are concerned that by both exaggerating the threats and advocating for certain policy solutions, companies such as OpenAI could be engaging in a form of regulatory capture.

“The possible scenarios that people have dreamed up for AI existential threats are all based on unfounded speculation rather than science or empirical evidence,” said Santa Fe Institute professor Melanie Mitchell at a Munk Debate in Toronto in June, where she sparred onstage with Prof. Bengio about the dangers of AI.

“Such sensationalist claims deflect attention from real, immediate risks, and, further, might result in blocking the potential benefits that we can reap from potential progress.”
