
The Trudeau government went into the election last fall with two promises about “protecting Canadians from online harms.” Having won re-election, it has discovered that doing so is easier said than done.

One promise, to “strengthen the Canada Human Rights Act and the Criminal Code to more effectively combat online hate,” implied reviving a bill that died in the last Parliament, one that would have given the Canadian Human Rights Commission the power to go after alleged hate speech under the rubric of discrimination law. We discussed the problems with that bill, which the Liberals have not yet reintroduced, in yesterday’s editorial.

The other promise, to introduce legislation within 100 days of returning to power to “combat serious forms of harmful online content,” has also stymied the Liberals. Their self-imposed deadline came and went in February; at the end of March, Heritage Minister Pablo Rodriguez said the tabling of a bill was still months away.

The delay stems from the enormous blowback the Liberals faced after they detailed their approach to regulating online harms in a technical paper last summer.

Their plan was to target what they considered the five most egregious forms of online harm – terrorist content, content that incites violence, hate speech, the non-consensual sharing of intimate images, and child pornography – and lump them together in a one-size-fits-all regime.

Social-media platforms and other “online communications services providers” (OCSPs) would be obliged to identify harmful content and block it in Canada. They would also have to respond to users who flag content in as little as 24 hours, and to report content to police if they believed it to be illegal or to represent an imminent threat.

The plan would further oblige companies to preserve data related to possible illegal content, and to provide it to authorities on demand. The regime would be overseen by a Digital Safety Commissioner, with the power to raid the offices of OCSPs without a warrant, and to seize any “document, information or any other thing, including computer algorithms and software,” to determine if they were compliant with the law.

The proposal justifiably triggered fears about invading the privacy of social-media users, as well as concerns that companies would pre-emptively censor content left and right to keep government inspectors out of their hair. There were also worries about lumping something as clearly illegal as child pornography in with hate speech, which must clear a very high bar to be proven criminal.

Ottawa appears to have listened to the criticism. In March, it set up an advisory group to help it take another stab at its proposal, and said this time it will focus on a “systems-based approach,” rather than a content-based model.

Under a systems-based approach, Ottawa said it would impose a “duty of care” on internet companies, and require them “to take reasonable steps to introduce tools, protocols and procedures to mitigate foreseeable harms arising from the operation and design of their services.”

This is the direction Britain and the European Union have chosen. Instead of looking for ways to oversee every tweet, post or video, governments would require companies to have policies to reduce potential harms, and to co-operate with police when there is a clear breach of the law, such as child pornography.

It’s a smarter approach, because it’s a more limited approach. Many online platforms have a business interest in engaging users by pumping up controversial posts, and there’s a growing sense that this is about as healthy for our society as a diet of pure junk food. At the same time, free speech means letting people say things that are wrong, and that will offend and upset. In the offline world, government regulation of such speech, so long as it is not criminal or libelous, falls somewhere between questionable and unconstitutional.

It’s best to leave it to companies, but have governments impose a duty of care on them. Private platforms are within their rights to remove content they deem unacceptable or harmful to their business. They can even do likewise with users: see Donald Trump on Twitter – or rather, you can’t see Donald Trump on Twitter. But when governments try to get into this game, they risk overreaching. That is what the Liberals must avoid.
