
The House of Commons chamber is seen empty in Ottawa on April 8, 2020. Many of the Liberals' proposed laws to safeguard Canadians’ digital rights and curtail the power of Big Tech platforms took a back seat to pandemic policies. Adrian Wyld/The Canadian Press

Canada is falling behind on protecting citizens from the negative effects of digital platforms that use algorithms and artificial-intelligence systems to make recommendations and decisions, technology specialists say – but the country has a chance to catch up when Parliament resumes in the coming weeks.

Prime Minister Justin Trudeau’s snap election call last summer left a handful of digital-regulation bills stalled in the House and Senate, including legislation to address harmful content and to align Canadians’ data and privacy rights with more progressive jurisdictions such as the European Union and California. With a new cabinet set to be named Tuesday, the government is expected to revive much of its stalled legislation.

It’s been a year since the Liberals introduced a blitz of proposed laws to safeguard Canadians’ digital rights and curtail the power of Big Tech platforms, but many of the initiatives took a back seat to pandemic policies. In that time, scrutiny of AI and algorithmic decision-making has only grown, and fears about the future of digital rights have extended well beyond privacy.

Recent revelations from Facebook Inc. whistle-blower Frances Haugen – reported by The Wall Street Journal and a consortium of other news outlets, and followed up by researchers including the office of U.S. Senator Richard Blumenthal – include findings that the company’s algorithms could incite hate speech and promoted accounts that lauded eating disorders.

There are ways to reduce these kinds of problems, experts say. “It’d be incredibly beneficial if consumers could choose different algorithms – different ways of ranking or looking at things,” said Ron Bodkin, the chief information officer of the Vector Institute for Artificial Intelligence and engineering lead for the Schwartz Reisman Institute for Technology and Society. “If you allowed market choice, you could have some great outcomes.”
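As an illustration of Mr. Bodkin's idea – not code from any platform – consumer choice among ranking algorithms can be sketched as a pluggable interface, where the user picks which function orders their feed. All names and fields here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    engagement: int   # interactions the platform recorded
    timestamp: int    # seconds since epoch

# A ranking algorithm is simply a function that reorders a feed.
RankingAlgorithm = Callable[[list[Post]], list[Post]]

def rank_by_engagement(feed: list[Post]) -> list[Post]:
    # The kind of engagement-maximizing ordering critics single out.
    return sorted(feed, key=lambda p: p.engagement, reverse=True)

def rank_chronologically(feed: list[Post]) -> list[Post]:
    # A simple alternative: newest posts first.
    return sorted(feed, key=lambda p: p.timestamp, reverse=True)

# The "market choice" idea: several interchangeable rankers,
# and the consumer – not the platform – selects one.
ALGORITHMS: dict[str, RankingAlgorithm] = {
    "engagement": rank_by_engagement,
    "chronological": rank_chronologically,
}

def build_feed(feed: list[Post], user_choice: str) -> list[Post]:
    return ALGORITHMS[user_choice](feed)
```

The design point is that the feed pipeline depends only on the function signature, so third-party or user-selected rankers could be swapped in without changing the rest of the system.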

The Liberals introduced a bill last November to enact the Consumer Privacy Protection Act, which would have brought Canada close to the EU and California on data protection – and would also have required companies to publicly explain how their algorithmic or AI-powered “automated decision systems” work. These are the kinds of systems that often use data about consumers to predict what content they would like to see, or to recommend items for reading or purchase – and they have drawn only greater criticism in the 11 months since the bill was introduced.

The bill fizzled alongside the previous Parliament, but experts including Mr. Bodkin said the re-formed government has a chance to more strongly regulate these kinds of systems. Beyond mandating algorithmic choice for consumers, he also believes governments such as Canada’s could grant qualified researchers the chance to audit platforms’ decision-making systems, to find and quantify risks.

“Trying to in-source an audit and inspection function in the government, I don’t think that would work as well as empowering existing institutions where there’s capable people who can actually do a lot,” Mr. Bodkin said. Academics, journalists and other researchers are already trying to do this work, he added, and audit powers would let them do so with greater privileges. (The Vector and Schwartz Reisman institutes have advised Ontario’s government on matters such as AI.)

Canada’s private-sector privacy law, the Personal Information Protection and Electronic Documents Act, is two decades old and does not reflect the challenges that modern platforms present to Canadians’ digital rights.

The country now faces both global and domestic challenges as a result. Europe’s General Data Protection Regulation, or GDPR, requires strict privacy stipulations when its residents’ data flow to other jurisdictions; Quebec just adopted its own provincial privacy regime with greater protections, and Ontario is considering its own.

“We need something at the federal level, especially in comparison with Quebec,” said Céline Castets-Renard, a university research chair who focuses on accountable artificial intelligence at the University of Ottawa. And when it comes to automated decision-making systems, she said, “We need more ambition.”

GDPR, for example, grants people the right “not to be subject to a decision based solely on automated processing” in instances where there might be legal consequences. Added Dr. Castets-Renard: “It’s a question of human control and human oversight.”
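The GDPR provision Dr. Castets-Renard points to amounts to a human-in-the-loop requirement: a decision with legal consequences cannot rest solely on automated processing. A minimal sketch of that gate, with all names and fields invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    approved: bool
    has_legal_effect: bool        # e.g. a credit denial or benefit ruling
    reviewed_by_human: bool = False

def finalize(decision: Decision) -> Decision:
    # Under an Article 22-style rule, a decision with legal effect
    # may not be finalized by software alone: require a human review
    # before it is allowed to take effect.
    if decision.has_legal_effect and not decision.reviewed_by_human:
        raise PermissionError("human review required before this decision takes effect")
    return decision
```

In this sketch, purely automated decisions without legal effect pass straight through, while consequential ones are blocked until a person has signed off – the "human control and human oversight" the provision is meant to guarantee.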

Last week, the federal NDP insisted Canada set up an independent social-media ombudsman to protect against online hate. Party ethics critic Charlie Angus cited Facebook as an example of a platform whose algorithms help drive profit to the detriment of its users.

Facebook leaders, including Canadian policy staff, have called for more regulation since the whistle-blower revelations were made public, and have said the company’s tools for letting users control what they see on the platform are industry-leading.

“In the long run, people are only going to feel comfortable with these algorithmic systems if they have more visibility into how they work and then have the ability to exercise more informed control over them,” Facebook’s global-affairs vice-president, Nick Clegg, said in an e-mailed statement.

The social-media company has also taken pains to outline its own internal responses to concerns around harmful content, including using AI technology to monitor and block hate speech.

But debate over whether governments should regulate the sector or whether the sector should self-regulate has transcended numerous decades and industries. “I don’t think self-regulation would be sufficient here,” Dr. Castets-Renard said.

Pavel Abdur-Rahman, IBM Canada’s head of data, AI and ethics, said regulation is necessary but should be seen as a floor for corporate behaviour, not a ceiling. In a competitive market, he argued, companies should want to demonstrate their own capabilities as responsible actors and reach a higher standard.

But he added that governments should approach platform and AI regulation as they do nuclear regulation, with strict global standards that allow for safe use and minimize harm. The wrong route, Mr. Abdur-Rahman said, would be to regulate algorithms and AI the way the world has handled climate change – where “we are not doing it fast enough, we are not coming together as humanity.”

John Power, a spokesperson for François-Philippe Champagne, the most recent Innovation Minister, who may return to the role this week, said in an e-mail that the government would announce more about “plans for potential changes to the laws and regulations governing the handling of Canadians’ personal data” once the new cabinet is sworn in.
