Opinion

Brandon Ambrosino is a freelance writer in Delaware.


Faith Goldy, who was supposed to speak at Wilfrid Laurier University but was interrupted by a fire alarm, speaks outside the university on March 20, 2018. Hannah Yoon/The Canadian Press

Imagine the transhumanists, predictors of the future, get it right – and at some point, we exist as nothing more than code in a cloud. We can’t pick flowers or hug our partners, but we can meet new people and have lively conversations with them. Imagine one of these conversations turns into a yelling match. Things get heated. Insults are exchanged. Onlookers join in.

Then imagine someone violates one of the rules of the platform, and is swiftly reported to the authorities – who are themselves nothing more than information. The person is banned from the site, or perhaps from a host of sites, or perhaps from the cloud altogether. A statement is released about the banning: “The individuals and organizations we have banned today violate our standards surrounding what constitutes organized hate, and they will no longer be allowed a presence on our services.”

The offending party would be no-platformed. No-presenced.

If, in fact, the only place where someone exists in the future is on a platform and you take that platform away from them, then you’ve killed them.

Yes, this is an extreme vision, one entertained by the techno-optimists and sci-fi writers among us – although one part of it is very real: the quote above comes directly from a recent Facebook announcement. But is the step from one kind of existence to another any larger than the one from bloodletting to modern medicine, or from horses to self-driving cars? The line between Now and the Future is blurrier than we realize; when we cross that divide without realizing it, we’re unprepared for what we encounter.

We cross this line every day; social media is a world that none of us were prepared for. Take Mark Zuckerberg’s recent post titled “Four Ideas to Regulate the Internet” – as if the internet were only recently in need of regulating; as if people hadn’t already cast their votes for a U.S. president because of a thing called Pizzagate; as if extremists hadn’t already found a perfect platform for radicalizing users.

As part of its recent stand against such radicalization, Facebook announced on Monday that it had banned Faith Goldy, the former Toronto mayoral candidate, from its platform, citing her as a leader of “organized hate,” which the company defines as any association of people holding ideologies “that attack individuals based on characteristics.”

Banning Ms. Goldy is in line with the Facebook CEO’s goal of eradicating “harmful content” from his platform. “Harmful content” is his catch-all term for terrorist propaganda, hate speech “and more” – the latter an ambiguous phrase that could refer to literally anything Mr. Zuckerberg and his team decide it means.

This should give us pause. Can the problem of internet hate really be tackled or even mitigated by a team of techies in dialogue with a team of experts, as Mr. Zuckerberg calls them? In a post titled “Standing Against Hate,” the word hate is used 12 times and defined zero times.

Different people will have different versions of hate. Some gay people, such as myself, might feel uncomfortable when some conservative Muslims talk about homosexuality in degrading terms, and some of us might describe such speech as hateful. Similarly, some of those same Muslims might feel that gay people are being hateful when we condemn heterosexist elements found in their traditions. But are we guilty of hate? What if several of us start a public discussion online; is our hate then organized?

Any one-size-fits-all definition of hate is not going to suffice. Human speech is a complicated affair. “Words strain,” T.S. Eliot writes, and they “won’t stay still.” At any point in a conversation, I can tell it to you straight, or tell you a joke, or quote a third party, or quote someone who once quoted a third party, or I might sarcastically say what I mean by saying what I don’t mean. Human beings have evolved a highly developed capacity for communication, and regulating that – on a digital platform, no less – runs the risk of getting it very wrong.

To his credit, Mr. Zuckerberg is upfront about this, noting that given the scale of the platform, the company will “always make mistakes and decisions that people disagree with.” But if that’s the case, then why not let us be the ones who make the mistakes? Regulating speech and behaviour from the ground up, rather than by coercion, is the way to initiate the most meaningful change.

Let’s be clear. White nationalism and any other ideology that seeks to protect a “white race” from non-whites is evil, and deserves fierce condemnation (as well as lessons in history and biology). Racism, under any name, is shameful and is a stain upon the human race. That’s not up for question. What is debatable, however, is whether “no-presencing” such people online will have any effect on what they do offline. Banning voices from the conversation is one surefire way of ensuring they don’t go away. U.S. cultural elites have spent the past decade running “deplorables” out of the public square, and were shocked to discover they’d returned wearing red baseball caps.

The greater concern, though, isn’t with who is banned, but with who isn’t. Entire social-media platforms – especially Twitter – seem perfectly geared toward hating everyone.

Sure, the haters might not be using the language of white nationalists, but that doesn’t matter to the person being bullied. What matters is that the hate they endure feels every bit as real as what Mr. Zuckerberg classifies as organized hate.

The worry, then, is that Facebook will send a message that not all hate is bannable – which means that some hate is acceptable. If we make a big show of banning some haters instead of others, do we end up giving some kind of endorsement to those who remain? “Thank you for reporting Deborah’s Facebook post. We’ve reviewed it and found that her hateful message isn’t hateful enough to remove her from our platform.”

Mr. Zuckerberg is responding to the very real and growing threat of white nationalism, which is often linked to unspeakable acts of violence. And yet white nationalism, as evil as it is, is still only one of the many problems we encounter on social media. Facebook has to start somewhere, obviously, and the company is going to “make mistakes” – but at the very least, it could start by defining its terms.
