
Federal Industry Minister Francois-Philippe Champagne speaks at the All In artificial intelligence conference on Sept. 27. Ryan Remiorz/The Canadian Press

Aaron Wudrick is the domestic policy director at the Macdonald-Laurier Institute.

Whatever its other strengths, moving quickly is not an attribute that many people would associate with government. That’s even more apparent when it comes to attempts by governments around the world to develop regulatory frameworks governing artificial intelligence (AI) – an area that is evolving so quickly that laws risk being rendered obsolete before they even come into effect.

Canada’s contribution to this struggle is contained in Bill C-27, which proposes to implement the Artificial Intelligence and Data Act (AIDA). But while the AIDA aims to establish comprehensive guidelines for AI systems, with an emphasis on “citizen trust and global interoperability,” its overly broad impact-based approach risks placing Canada among the most restrictive AI markets worldwide. This stance is notably different from the more nuanced, risk-based strategies seen in regions such as the European Union.

The significance of formulating an effective AI policy is underscored by AI’s anticipated economic impact. With AI projected to contribute up to US$15.7-trillion to the global economy by 2030, Canada’s regulatory approach could damage our country’s economic prospects in this domain, and anything else that AI touches. Lessons from recent Canadian legislation such as Bill C-18, also known as the Online News Act, which led to Meta Platforms Inc. pulling news links from Facebook, and Google threatening to do the same, reveal challenges in aligning with the rapidly evolving digital landscape, highlighting the consequences of poorly considered policy.

The central trade-off that governments must reckon with in regulating AI (as in many areas) is how to maximize innovation while minimizing risk to the public. Here, the EU’s efforts are instructive: Its AI Act categorizes AI technologies based on risk levels, offering a model that promotes innovation and ensures public trust by focusing regulation where it’s most needed.

The EU approach sorts AI technologies into risk tiers: minimal- and low-risk applications face few restrictions, while high-risk categories require stringent oversight, accuracy standards, human supervision, risk assessments and usage logs. This risk-based framework ensures that innovation is not unduly hampered while also ensuring technologies are deployed responsibly, particularly in sensitive areas such as law enforcement and critical infrastructure.

According to the EU, the ultimate “aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing risks of very powerful and impactful AI models.” And while the EU approach isn’t perfect, it represents the best attempt so far to strike the right balance.

AI stands to revolutionize various sectors, from personalized medicine in health care, to sustainable farming practices in agriculture, to better urban planning for cities. AI will have wide-ranging applications, and an overly cautious approach by Canadian policy makers will delay, if not deny, Canadians their benefits.

For Canada, adopting a similar risk-based model would not just align with global standards, but also unlock AI’s potential across sectors, driving economic growth. It would position Canada as a leader in AI, ensuring Canadian companies are best placed to capitalize on its opportunities.

Unfortunately, AIDA’s current approach appears to conflate impact (how widely AI’s influence would be felt) with risk (how serious such consequences would be), and if implemented in its current form, it will not only deter innovation but risk isolating Canadian AI firms from the global economy.

Already, there is a growing chorus of voices against the proposed legislation, including the Center for Data Innovation, the Centre for International Governance Innovation and even unions such as the Canadian Union of Public Employees (CUPE).

Some critics fear the bill will lead to government overreach and further bureaucratic bloat. Speaking last November to Parliament’s standing committee on industry and technology, McCarthy Tétrault senior counsel Barry Sookman called Bill C-27 “fundamentally flawed” and “an affront to Parliament.”

“AIDA sets a dangerous precedent,” Mr. Sookman told the committee. “What will be next? Fiat by regulation for quantum computing? Blockchain? The climate crisis or other threats? We have no idea … [the bill] paves the way for a bloated and unaccountable bureaucracy.”

As Canada charts its AI future, it must embrace a framework that not only protects but also empowers: a strategy that is innovative and inclusive, aligns with emerging global best practices, and keeps Canada in the global AI vanguard.

Unfortunately, Bill C-27 simply doesn’t compute.
