Industry Minister François-Philippe Champagne says Canada will consider creating an AI safety institute as part of a global effort to better regulate artificial intelligence.
In an interview at the AI Safety Summit at England’s Bletchley Park, Mr. Champagne also defended the federal government’s approach to regulating AI, which has faced criticism for being too vague. He insisted that Canada has been at the forefront of regulation and that the government has won praise for its proposed Artificial Intelligence and Data Act from renowned experts such as Yoshua Bengio at the Université de Montréal.
“When you have experts like him who say, ‘This is the right time, this is the right legislation and Canada should go forward,’ you know, I appreciate there might be other voices, but I take comfort from the fact that you have one of the leading Canadian experts” supporting the measure, he said.
Critics have argued that it’s not clear which AI systems would be covered by AIDA, and that the commissioner responsible for enforcing the legislation would not be independent. They have also called for more public consultation about the proposed law.
“AIDA needs to be scrapped completely,” former BlackBerry Ltd. co-chief executive Jim Balsillie told a parliamentary committee this week.
Mr. Champagne said it was critical to make the legislation broad because AI models are constantly changing. “If you’re too prescriptive, you take the risk of also being irrelevant. I mean, even here, I’m surrounded by the best minds in the world on AI, and I’ve been in sessions, and no one knows where they could be in five or 10 years from now.”
He said he was open-minded about the role of the commissioner, but insisted that the bill had struck the right balance. As for Mr. Balsillie’s comments, Mr. Champagne said he put more faith in Dr. Bengio’s assessment. “It’s very tough to challenge a guy like him,” he said. “Jim I like. But have you seen Jim positive on something?”
Dr. Bengio has been supportive of the proposed law but he has also called for it to be enhanced “to include key elements such as the labelling of AI-generated synthetic content, setting up a licensing and model registration regime, defining safety obligations and requiring independent audits.”
Mr. Champagne was among representatives from 28 governments who attended the two-day safety summit at Bletchley Park, site of a famous team of code breakers during the Second World War. The summit also attracted participants from dozens of tech companies, academic institutions and non-profit organizations.
The meeting focused mainly on the long-term challenges posed by frontier AI – the most advanced models – and how to regulate ever more intelligent systems. There was little agreement on what specific actions countries can take, either individually or collectively, and many governments appear to be going their own way.
The United States and the European Union have begun developing detailed regulations, while Britain and the U.S. have announced plans to establish AI safety institutes.
Mr. Champagne said Canada could soon follow suit with such an institute of its own. “We’re going to look into that. I think it’s a good thing.”
British Prime Minister Rishi Sunak said just bringing together more than 100 representatives from government, industry and civil society to talk about AI safety for the first time was an accomplishment. “Not only has it been a very good and thoughtful conversation, it has led to some very concrete outcomes that will ensure that we all can enjoy the benefits of AI,” he said during a press conference on Thursday.
While the summit did not produce a regulatory framework, Mr. Sunak can point to some successes.
Delegates from all 28 countries, including the U.S. and China, signed a declaration promising to work collaboratively on identifying the risks of AI and developing potential solutions.
Mr. Sunak announced that Dr. Bengio will head a global scientific project to produce a “State of the Science” report which will prioritize areas for future AI research. The work will be similar to reports issued by the Intergovernmental Panel on Climate Change.
The Prime Minister also said that the British AI safety institute will work in partnership with several leading tech companies to test frontier AI models before they are put into widespread use. The institute won’t act as a regulator, and it’s not clear how much access it will have to new technology, but Mr. Sunak insisted it was a critical first step toward shaping government policy.
The institute’s industry partners so far include Google DeepMind and Anthropic, an AI safety and research firm based in San Francisco.
Mr. Sunak also tried to address the public’s growing unease about potential job losses caused by AI. “We should look at AI much more as a co-pilot than something that necessarily is going to replace someone’s job,” he said. “AI is a tool that can help almost everybody do their jobs better, faster, quicker.”