Opinion

Stephanie MacLellan is a research associate with the Centre for International Governance Innovation, specializing in Internet governance and cybersecurity.

In the aftermath of last weekend's deadly protests in Charlottesville, tech companies have been blocking far-right extremist groups from their services. This has led to a debate over freedom of expression on the Internet, and the role of companies such as Facebook, Twitter and Google in limiting it.

But if it were a different violent extremist group – say, the so-called Islamic State – there would be no debate. In fact, tech giants have been booting IS supporters off their platforms and disrupting their social media networks for more than a year, and there has been no serious outcry.

Why should one group of violent extremists be treated differently than another?

Far-right domestic terrorists – including white supremacists, neo-Nazis and self-declared sovereign citizens who don't recognize government authority, among others – pose at least as much of a threat in North America as Islamic State terrorists. Of the 85 deadly terrorist attacks in the United States since 2001, 73 per cent were committed by far-right extremists, compared to 27 per cent by Islamist extremists. But it took the tragedy in Charlottesville, where one person was killed and several more injured after a car plowed into a crowd of counter-protesters, to thrust the threat of far-right violent extremists into the spotlight.

In the following days, the Daily Stormer neo-Nazi website was kicked off a series of web hosting providers. Facebook deleted a number of far-right pages with names like "White Nationalists United" and "Right Wing Death Squad." The Discord chat app shut down several far-right groups. Even OkCupid, the online dating site, cancelled the account of one prominent white supremacist and banned him for life.

While many cheered, these developments also raised difficult questions about how far digital companies should go in silencing hateful content.

Attempts to police hate speech on online platforms often cause as many problems as they solve. As revealed by ProPublica, Facebook's policy on hate speech protects some identifiable groups, but not subsets of those groups. As a result, you can have your account suspended for directing vitriol at "white men," but not at "Black children" or, in some cases, "migrants." Some Black activists and scholars of extremism have also complained that their social media posts documenting incidents of racism or explaining new terrorist propaganda have been blocked by various platforms. There are also concerns that harsh new anti-hate-speech laws in Europe will lead companies to take down more legitimate content for fear of incurring massive fines.

On the other hand, tech companies seem to be more consistent and motivated when it comes to saving lives from terrorist attacks, as seen in their responses to IS. After Twitter became notorious as the Islamic State's preferred medium for recruitment and propaganda, the company deleted more than 630,000 terrorist accounts between August 2015 and December 2016. This sustained campaign seems to be having an effect: A recent study from the VOX-Pol European think-tank found that networks of IS users on Twitter had been decimated. Those who wanted to stay on Twitter adopted innocuous avatars and screen names and toned down the content of their tweets, which severely diminished their online identity and propaganda value. Facebook, Google and YouTube have also introduced new measures to remove terrorist content.

The question of far-right extremism online should be seen not as a matter of free speech on the Internet, but as a matter of preventing ideologically driven violence in the real world. That's the threat IS supporters pose when they use Twitter to convince Western teenagers to join the so-called caliphate in Syria, and it's the threat the Charlottesville organizers posed when they used Facebook and Discord to plan their gathering of white supremacists.

Stifling expression, even hateful expression, should never be taken lightly, and tech companies should implement consistent and transparent policies for removing any kind of content. But when lives are at stake, inaction is not an option.
