Jonathan Roberge holds the Canada Research Chair in New Digital Environments and Cultural Intermediation. He is an associate professor at the Institut national de la recherche scientifique and is scientific head of the Laboratory on New Digital Environments and Cultural Intermediation.

Fenwick McKelvey is an associate professor in information and communication technology studies in the department of communication studies at Concordia University.

Last week’s federal budget committed $443.8-million over the next 10 years to renew the Pan-Canadian Artificial Intelligence Strategy. By chance, the budget coincided with the European Union’s release of proposed AI regulation. Comparing the two shows that Canada is playing a risky game by avoiding a robust, rights-based approach to AI governance.

AI is now firmly part of how society is governed, and the EU approach is a clear interventionist legal framework meant to address AI’s complexity, unpredictability and autonomous behaviour. The approach bans certain applications of AI, notably most uses of facial recognition in public space, designates high-risk activities and then calls for better codes of conduct and assessment tools for low- or moderate-risk uses.

The prohibitions are welcome. For the many Canadians worried about AI after watching the hit docudrama The Social Dilemma, the EU’s ban on AI intended to manipulate people’s behaviour or exploit their vulnerabilities will sound eminently sensible.

High-risk uses must also meet obligations for oversight, documentation and explainability. AI’s use in the workplace, in education and in government is designated high-risk, casting critical attention on applications of AI in these areas in Canada. Even low-risk applications have to be more transparent if the system interacts with the public.

By contrast, Canada’s approach to AI regulation resembles a house of cards. At the federal level, Canada has haltingly implemented risk assessments for AI use by the federal government. Reforms to the privacy law propose limits on algorithmic decision making. We are also waiting for a reaction to the Office of the Privacy Commissioner of Canada’s draft regulation on AI, a program the budget did not mention. The budget does deliver on the government’s promise of a data commissioner tasked with protecting the integrity of Canadians’ data.

But Canada’s unco-ordinated approach is fragile. The reforms to the privacy law carve out wide exemptions for deidentified data – a process often accomplished through AI – encouraging further use of AI rather than setting limits. The Globe and Mail’s own reporting has shown the adoption of the government of Canada’s AI assessment tool is spotty at best. The data commissioner’s mandate is as much about business development as data protection.

Meanwhile, millions of Canadians are part of experiments with new AI tools such as Bell Canada’s confidential system to filter malicious and fraudulent spam calls. Canada’s approach is ad hoc, with ample room for interpretation and gaming the scattershot rules.

The EU’s regulation is flush with discussions of rights. Rights are first on the list when assessing AI’s harm. There are clear principles, too: AI should not be used to undermine human dignity, and it must respect private life and the protection of personal data. The same language is missing in Canada, with Bill C-11 giving only a passing mention of privacy as a human right.

The Canadian budget doubles down on funding AI even though the bet has come up short recently. Founded by one of Canada’s “fathers” of AI, Element AI was Canada’s ace. But its sky-high valuation folded when it sold for a reportedly low US$230-million to ServiceNow, a company best described as offering business analytics or employee surveillance – arguably high-risk activities under the EU’s proposal.

To be sure, the EU’s regulation is not perfect. Member states still need to implement the regulation. What exactly counts as a manipulative use of AI is still open to interpretation. Monitoring and enforcement are unproven, but at least the EU knows the stakes with its focus on harms and clear bans.

By continuing to fund AI without developing better governance of it, Canada is letting the industry write its own rules.
