Chris Tenove is a postdoctoral fellow in political science at UBC. Heidi J.S. Tworek is an assistant professor of history at UBC. Fenwick McKelvey is an associate professor of communication studies at Concordia University. They are co-authors of the recent Public Policy Forum report, Poisoning Democracy: What Canada Can Do About Harmful Speech Online.

It is increasingly clear that online speech contributes to offline violence and fear. In the United States, demonization and denigration have become regular parts of political discourse, whether the targets are political opponents or scapegoated groups such as Jewish congregants, migrants fleeing Central America or outspoken women. Hatred and fear on social media have led to violence in Myanmar, Sri Lanka, Kenya and elsewhere.

Canada has not avoided these developments. Online hatred seems to have partly motivated the 2017 mass shooting at a Quebec City mosque and the 2018 van attack in Toronto. More broadly, right-wing extremism is increasing rapidly online.

Hate, abuse and harassment are all forms of what we call “harmful speech.” Harmful speech is not limited to social media, but these platforms can make it easier for hateful ideologies to spread, and for individuals to target other users with threats of violence. Foreign actors, too, have found social media platforms a convenient means to pursue political aims, including by promoting social conflict on issues of race, religion and immigration.

Canada has laws to address some of the most problematic forms of harmful speech, including hate propaganda, threats of violence and foreign interference in elections. The agencies responsible for enforcing these laws need the resources and political backing to take stronger action.

However, the social media companies themselves have a critical role to play. Right now, the vast majority of harmful speech is dealt with (or not) through the enforcement of platforms’ own community guidelines or standards. These policies have been developed in response to tragedies, user complaints, company branding exercises, and – to an extent – national laws. Two figures show the scale of this issue. In the first three months of 2018, Facebook took action on 2.5 million pieces of hateful content. Between April and June this year, YouTube users flagged videos as hateful or abusive more than 6.6 million times.

Despite their laudable efforts, platforms struggle to enforce their content moderation policies in ways that are timely, fair and effective. Just days after 11 people were killed in a mass shooting at a Pittsburgh synagogue, Twitter allowed “Kill all Jews” to trend as a topic after the phrase was used in an alleged hate crime in Brooklyn. And when social-media companies do apply their policies to high-profile users, such as when multiple platforms banned Infowars’ Alex Jones, they can face a backlash and even threats of government action.

Platform companies cannot solve these problems alone. They need clearer guidelines from governments, and greater assistance from civil society groups and researchers. In return, they need to be more transparent and responsive to the individuals and communities affected by their policies.

We make three recommendations to pursue those goals in Canada.

First, the federal government should compel social media companies to be more transparent about their content moderation, including their responses to harmful speech. Some platforms are doing much better than just a year ago. However, whether Canadians are informed about how our online speech is governed should not be left to the companies’ discretion.

Second, governments, foundations, companies and universities need to support more research to understand and respond to harmful speech, as well as the related problem of disinformation. Other democracies are doing a much better job than Canada in this area.

Finally, we propose a Moderation Standards Council. Similar to the Canadian Broadcast Standards Council, the council would convene social media companies, civil society and other stakeholders to develop and implement codes of conduct to address harmful speech. The council would share best practices, co-ordinate cross-platform efforts and improve the transparency and accountability of content moderation. It would also create an appeals process to address complaints. We believe such a council would provide a fairer, better co-ordinated and more publicly responsive approach to harmful speech online.

Our recommendations strike an appropriate balance between the protection of free expression and other rights, recognizing that expression is not “free” for people who face hate, threats and abuse when engaging in public debates. Our recommendations also balance public oversight with industry viability. More co-operation on these issues with government and civil society makes good business sense for social media companies.

Above all, we hope to foster broader public debate on this issue. Responses to harmful speech should not be decided for us in Silicon Valley boardrooms or in offices on Parliament Hill alone. The rules for speech online should be subject to public input and oversight. The poisoning of democracy is a serious and complex problem. It should be addressed democratically.
