Prominent artificial intelligence researchers and tech leaders, including Canadian deep-learning pioneer Yoshua Bengio and Tesla chief executive Elon Musk, are calling for a temporary pause on the rapid development of some AI systems, arguing the technology poses “profound risks to society and humanity.”

They and around 1,300 other people have signed an open letter proposing that AI labs immediately halt the training of systems that are more powerful than GPT-4, the latest iteration of a large language model created by OpenAI. The letter suggests the pause continue for at least six months, to give the industry time to create and implement shared safety protocols. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

Other signatories include Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and Emad Mostaque, the chief executive of Stability AI, which has created a popular text-to-image generator called Stable Diffusion. The letter was co-ordinated by the Future of Life Institute, a non-profit where Mr. Musk serves as an adviser.

Mr. Bengio, the founder and scientific director at Mila, a machine-learning institute in Montreal, said at a news conference Wednesday that AI has the potential to bring many benefits to society. “But also I’m concerned that powerful tools can have negative uses and that society is not ready to deal with that,” he said.

Generative AI, a term for technology that creates text and images based on a few words supplied by a user, has skyrocketed in popularity since OpenAI released a chatbot called ChatGPT in November. Venture capital firms have rushed to pump money into AI startups, while established tech giants – such as Microsoft and Google parent company Alphabet – have scrambled to integrate generative AI features into their products.

The developments have astounded many. GPT-4, which was released earlier this month, can describe images, code a website based on nothing more than a napkin sketch and pass standardized tests. But some observers are deeply worried by the breakneck speed at which these systems are gaining sophistication.

Of particular concern to Mr. Bengio is the possibility that large language models, or LLMs, could be used to destabilize democracies. “We have tools that are essentially starting to master language,” he said. “We already have advertising and political advertising. But imagine that boosted with very powerful AI that can speak to you in a personalized way and influence you in ways that were not possible before.”

The letter cites other risks, including the potential for jobs across industries to be automated. And it notes that AI models are opaque and unpredictable. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter says. “Such decisions must not be delegated to unelected tech leaders.”

The proponents of the pause argue that industry safety standards not only need to be created and put in place, but audited and overseen by independent experts. The signatories are not calling for a pause on AI development in general, but “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” If the halt can’t be implemented quickly, the letter says, governments should step in and impose a moratorium.

“Six months is not going to be enough for society to find all the solutions,” Mr. Bengio said. “But we have to start somewhere.”

In response to the letter, OpenAI CEO Sam Altman told the Wall Street Journal that the signatories are “preaching to the choir.” He said his company has always taken safety seriously. OpenAI, which is based in San Francisco, has not started training a successor to GPT-4.

Max Tegmark, an MIT physics professor and president of the Future of Life Institute, said at the news conference that while AI researchers and companies are rightly concerned about societal risk, they face immense pressure to release products quickly to avoid falling behind the competition. “Our goal is to help … avoid this very destructive competition driven by commercial pressure, where it’s so hard for companies to resist doing reckless things,” he said. “They need help from the broader community because no company can slow down alone.”

Some researchers have criticized the open letter. Arvind Narayanan, a computer-science professor at Princeton University, wrote on Twitter that the letter exaggerates both the capabilities and the existential risks of generative AI. “There will be effects on labour and we should plan for that, but the idea that LLMs will soon replace professionals is nonsense,” he said.

Yann LeCun, the chief AI scientist at Meta, wrote on Twitter that he did not sign the letter and does not agree with its premise. But he did not elaborate.

“There’s wisdom in slowing down for a moment,” said Gillian Hadfield, a law professor at the University of Toronto and senior policy adviser to OpenAI. “The real challenge here is we don’t have any legal framework around this, or very, very minimal legal frameworks.” Ms. Hadfield would like to see a system in which companies developing large AI models have to register and obtain licences, in case harmful capabilities emerge. “If we require a licence, we can take away a licence,” she said.

Canada has its own OpenAI competitor in Toronto-based Cohere Inc., which develops language-processing technology that can be used to generate, analyze and summarize text. Cohere partnered with OpenAI last year on a set of best practices for deploying the technology, including steps to mitigate harmful behaviour and minimize bias.

Through a spokesperson, Cohere declined to comment.

Calls to take a breather on AI development have been escalating in recent weeks. In February, Conservative MP Michelle Rempel Garner co-authored a Substack post with Gary Marcus, a New York University emeritus psychology professor and entrepreneur in Vancouver who has emerged as a vocal critic of how generative technology is being rolled out. The two made the case for governments to consider hitting pause on the public release of potentially risky AI.

“New pharmaceuticals, for example, begin with small clinical trials and move to larger trials with greater numbers of people, but only once sufficient evidence has been produced for government regulators to believe they are safe,” they wrote. “Given that the new breed of AI systems have demonstrated the ability to manipulate humans, tech companies could be subjected to similar oversight.”
