A global regulatory framework is needed to deal with the potential harms of artificial intelligence, OpenAI chief executive Sam Altman said during a talk in Toronto on Monday.
“At the absolute frontier, where we’re creating systems that we don’t yet know exactly what the emerging capabilities will be, I think there should be some sort of global licensing and regulatory framework, in the same way we do for other super-dangerous, super-high-potential technology,” Mr. Altman said.
He added that under this framework, AI models would have to pass certain safety thresholds before being released into the world. “It doesn’t slow down innovation, and it helps protect us from the most serious downside cases,” he said.
Mr. Altman made his remarks on stage at Toronto’s Design Exchange, in conversation with Shopify founder and chief executive Tobi Lutke. His appearance was part of a multicity, five-week world tour during which he is meeting with software developers and users of OpenAI’s products – including its most famous product, the AI chatbot ChatGPT.
The tour also includes stops in London, Paris, Dubai and Tokyo, among other places. The Toronto event was hosted by Elevate, a non-profit that organizes an annual tech and innovation summit.
Mr. Altman was a chipper pitchman for AI. He argued that we’re “living through one of the most important periods of human discovery,” but acknowledged that there are substantial risks. “I really do think that this is one of the few legitimately existential risk technologies that humanity faces. If we get this wrong, it’s very, very bad,” he said.
The tour, a goodwill effort of sorts, comes at a time of intense scrutiny of AI from lawmakers and regulators, as governments consider how best to mitigate the potential harms of the technology while still capturing its benefits. In recent months, experts and ethicists have raised concerns about AI-powered misinformation, biased AI-based decision-making, job losses as the technology gets better at performing various skills, and even existential threats it may pose to humanity. Some have called for a pause on AI development.
Mr. Altman will appear before a U.S. Senate panel Tuesday to discuss AI. Earlier this month, he met with U.S. President Joe Biden and tech executives as part of the White House’s efforts to get a handle on AI.
Canada has its own AI legislation in the works. The federal government introduced the Artificial Intelligence and Data Act last year as part of Bill C-27. It would establish a framework for regulating AI, but would leave the precise details to be written later.
In the meantime, AI is rapidly advancing. More powerful models will likely be released before the kind of regulatory framework Mr. Altman proposes could ever be implemented. “It definitely seems ambitious, but I think it’s important to just put out the ambitious ideas,” he told The Globe and Mail in a brief interview after the event. (Others, including Conservative MP Michelle Rempel Garner, have made similar proposals.)
Mr. Altman said there is no timeline for the release of GPT-5, the next iteration of OpenAI’s language model. Earlier this year, the company released GPT-4, which is capable of generating and interpreting text and computer code, among many other tasks. “We spent a very long time with GPT-4 to ensure its safety,” he told The Globe. “After we do eventually train GPT-5, we again expect to put in a tremendous amount of work. I think it’s very reasonable to hold us to a high standard.”
While generative AI – which refers to systems that can produce text, images and other media – has been in development for years, Mr. Altman’s company set off an arms race among tech giants with the release of ChatGPT late last year. The chatbot surged to 100 million users within a couple of months, helped convince Microsoft to pour billions into OpenAI and build generative AI into its own products, and forced Google to catch up. Last week, Google announced a slew of generative AI features for its applications, such as Gmail. The company also debuted a new search engine powered by AI.
Mr. Altman is now one of the most influential figures in the field. He co-founded OpenAI as a non-profit in 2015 alongside Elon Musk, who parted ways with the enterprise three years later. Mr. Musk blasted OpenAI earlier this year for straying from its non-profit roots, calling it a “closed-source, maximum-profit company effectively controlled by Microsoft.” (He is now working on starting his own AI venture.)
While Mr. Altman has said AI carries risk, he has also said he believes the downsides can be managed and that the technology will ultimately benefit humanity, in part by increasing the pace of scientific breakthroughs.
“We have lost our collective sense of optimism about the future,” he said during the talk on Monday. “The only way that I know to return to that sense of optimism and that sense of growth is to use technology to create abundance.” Artificial intelligence, he added, can play a big role in achieving that goal.
Other AI experts have raised concerns that AI could increase the power and influence of big tech companies, exacerbate inequality and displace workers. Mr. Altman said that he has “some sympathy” for the view that AI will lead to job losses. But he has a more optimistic take. “What I’m pretty sure will happen is this will just raise the bar for what is expected of humans. With a better tool, we do new things,” he said.
Mr. Lutke added, “AI replaces tasks, not jobs.”
OpenAI’s research is less sanguine on this point. In a paper released earlier this year, the company said GPT-4 and future models could lead to the automation of certain jobs, resulting in workers being displaced. “Over time, we expect GPT-4 to impact even jobs that have historically required years of experience and education, such as legal services,” according to the paper.
On stage, Mr. Altman said “it’s a little bit exhausting having to always talk about only the downsides” of AI, adding that he takes the risks seriously.
“One of the shorter-term concerns I have is around deep fakes, persuasion, manipulation,” he told The Globe. “Setting some regulation to address that in particular would be timely.”
Asked why he is pursuing a technology that he believes brings such substantial risk, Mr. Altman offered two reasons.
“Other countries, other actors are going to pursue it no matter what,” he said. And the potential upside is too great not to do it, he added. “I don’t think it works to just say humanity should never build AI. The benefits to education, health care, economic growth, the world, the market – people are going to demand that.”