Social-media companies often remove too much or too little content, failing to remove hate and threats, or taking down matters of public interest. Chris McGrath/Getty Images
Emily Laidlaw is the Canada Research Chair in Cybersecurity Law and an associate professor of law at the University of Calgary. She was co-chair of the expert panel on online harms and wrote this piece on behalf of the panel.
One in five Canadians has faced online hate, harassment or violence, according to an Abacus Data poll commissioned by the Canadian Race Relations Foundation. It’s worse for women, 2SLGBTQ+ persons, and racialized communities. The Canadian Centre for Child Protection reports a 150-per-cent increase in sextortion of youth in the past six months alone.
The government has been considering online safety legislation to regulate social-media companies for at least three years. Fully 80 per cent of Canadians support it, but not everyone agrees on a solution.
While the government was deliberating, we saw disinformation and misinformation give rise to COVID conspiracies that fuelled the so-called “freedom convoy.” Hate crimes have become more commonplace than automobile injuries, and tragically, we saw online hate inspire mass murders.
Some of the controversy has merit. Many people disagree about whether companies should report all suspicious posts to law enforcement (they shouldn’t, except narrowly for child exploitation); about the unintended consequences of requiring social media to remove hate speech; and about whether law should be used to address lawful but harmful expression, such as misinformation, eating-disorder promotion and the glorification of violence and extremism.
However, the core building blocks needed in online-harms legislation are significantly clearer. Perhaps surprisingly, as part of Heritage Canada’s expert advisory panel on online harms, we achieved consensus on many of the key features of legislation.
The legislation needs to ensure that technology companies have a duty to act responsibly. The way these sites are designed, from the algorithms that recommend some content and demote other content, to content-moderation policies, to monetization and advertising structures, shapes how we exercise our rights and how much harm we are exposed to. We are all affected, and can all experience harm, whether we use social media or not.
Platforms should have a duty to protect human rights and protect users from harm. They should be legally required to demonstrate their compliance and face monetary penalties for failure to meet the standard of care.
This is about more than the legality of a single post; it is about holding platforms accountable for their systemic risks of harm and their responsibility to respect human rights.
We need an ombudsperson (a type of regulator) who is independent from the government and has the power to investigate whether a company is meeting its duties, including the power to audit practices, recommend corrective action and impose fines.
Importantly, the ombudsperson should play a role in educating the public and working with stakeholders to develop codes of practice and advisories in a swiftly changing technological environment. The office needs to be well-resourced and powerful, so judicial review, such as the ability to apply for a hearing in certain circumstances, would be an important check on that power.
Platforms have for too long operated as black boxes, and there is a profound asymmetry between the knowledge they have about their users (us) and our collective knowledge about how their products are shaping society. Their data should be made available to researchers and civil society to hold platforms accountable and enable consumers to make informed choices about the technologies they use.
Users are at the mercy of companies and their wildly variable content-moderation practices. Social-media platforms regularly remove too much or too little content, failing to take down hate and threats, or removing matters of public interest.
However, it is not feasible for every item posted online to be adjudicated by an ombudsperson, although a narrowly scoped recourse body, focused for example on child protection or intimate-image abuse, has merit. It is the job of the ombudsperson to hear from victims and let their experiences guide investigations that drive systemwide change.
Victims shouldn’t be asked to do a homework assignment to defend their own humanity. It’s the duty of social-media companies to design their platforms in ways that protect users from harm and to provide fulsome complaints mechanisms, and it’s the job of government to set what that standard should be.
For example, automated tools are crucial to prevent and remove child sexual-abuse images, and violent and extremist content at scale. But automation is also imperfect. There should be a meaningful appeals process to re-evaluate that content to protect freedom of expression.
This legislation will not be a magic bullet. It won’t remove all the online toxicity and violence, nor will it ensure that freedom of expression and privacy are fully protected. However, this legislation can still make a difference. Canada is long overdue in passing laws in this space. Legislation that holds social-media companies accountable, even slightly, will have a much-needed impact.