Opinion

Emily Laidlaw is a Canada Research Chair in Cybersecurity Law at the University of Calgary. Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communication at McGill University.

The Canadian government has tabled online harms legislation, Bill C-63 – and we are glad it has. For years, through commissions, consultations and an expert panel, we have urged the government to learn from the EU, Britain and Australia and to develop a Canadian approach to online regulation that aligns with theirs.

So how have they done?

Right now, with a few narrow exceptions, social media companies self-regulate how they handle online harms. There are no laws mandating minimum standards for content moderation, safety in product design, or requirements to address known risks. While social media has produced many benefits, those benefits should not come at the cost of the clear harms that are all too present in our digital lives. Citizens overwhelmingly want social media, like other consumer products, to have guardrails – particularly after reports that social-media companies knew about the risks, but failed to act.

However, the sincere desire to address bad things online can lead governments to impose restrictions that risk undermining fundamental rights. Requiring the removal of hate speech, imposing mandatory age requirements, or relying solely on the criminal-justice system might seem like easy solutions, but they come at real costs to the free expression that the internet enables, and on which democracies rely.

This legislation takes a more measured approach, weighing the balance between free expression and protection from harm differently for three categories of content.

For seven types of harmful content, the government doesn’t seek a hard ban, but instead requires social-media companies to minimize the risk that their products may amplify or incentivize the distribution of this content. Companies demonstrate this responsibility by conducting risk assessments, implementing best practices and sharing data about the effectiveness of their efforts. This imposes corporate responsibility where none currently exists in law.

When it comes to content for children, the balance leans more toward protection. The legislation requires social-media companies to follow specific design restrictions on products likely to be used by children. This could mean, for example, preventing content about eating disorders or self-harm from being targeted at kids, or not allowing adults to directly message teenagers.

And for two particularly egregious and relatively easy-to-adjudicate content types, the legislation takes an even harder line. Intimate image abuse (including deepfakes) and child sexual-abuse material must be taken down by platforms within 24 hours of being identified – and if they fail to do so, they face severe penalties.

The act would create a commission to oversee these new rules, an ombudsperson who would serve as a citizens’ representative, and an office to support both. The government has clearly decided that online harms cannot solely be left to the courts. The scale and complexity of the issues, the speed at which online harm occurs, limitations on access to justice, and strained judicial resources demand a different approach. And so the mandate of the commission also includes education, research and international collaboration on developing best practices.

In our view, these terms are the building blocks of a good law. That said, there are genuine issues that warrant further debate in the coming months.

First: which platforms should be regulated? Bill C-63 includes large social-media services and live-streaming or pornography sites that allow users to upload content. It does not include private messaging services, gaming platforms or search engines. These platforms have facilitated tremendous harm as well, but their inclusion would create thorny rights problems, such as the risk of breaking encryption and incentivizing surveillance. Is that trade-off worth it? (Probably not, but that is a tough pill to swallow given the harms present on messaging services.)

Second: what obligations should be detailed in the legislation, and which should be left to regulation? Many will be concerned that much is being left for the commission to decide. However, technology is advancing so quickly that both the harms and the strategies to mitigate them must continually evolve. Did the government get the balance right between legislative clarity, parliamentary oversight and responsive regulation? (Broadly yes, but hearing at committee from international regulators who are already implementing such requirements might provide greater clarity.)

Third: does the legislation do enough to protect fundamental rights? The commission must take them into account, but there is no similar obligation on the companies it oversees. Their duties are more narrowly focused on harm reduction. Should companies also be required to evaluate the extent to which their products and policies affect users’ rights? The government must be cautious not to overstep its jurisdiction, but some mandatory consideration of rights is important for balance, and would bring this bill more in line with the EU’s and Britain’s approaches.

While there are many details of this extensive legislation that will require serious debate, it gets the big things right. Here’s hoping we can work together, avoid hyperbole on all sides, and make the bill as effective as possible. The issue warrants it.
