Jacob Mchangama is the founder and executive director of the Danish think tank Justitia and the host of the podcast Clear and Present Danger: A History of Free Speech. He is the author of the new book Free Speech: A History from Socrates to Social Media, from which this essay is partially adapted.
Much ink has been spilled over l’affaire Joe Rogan and whether the increasingly successful attempts to have him purged from Spotify over COVID misinformation – now bolstered with accusations of racism – constitute censorship. From a legal point of view the answer is clearly no. As a private platform in the United States, Spotify is protected by – not bound by – the First Amendment, and can freely decide which artists it wants to host, just as Twitter and Facebook are free to kick off the president. However, what’s missing from the endless debate about content moderation on Big Tech platforms is that a society’s commitment to free speech ultimately depends on a robust culture of tolerance. The principle of free speech will be seriously eroded if the practice of this freedom is stunted by orthodoxy and self-censorship in the fora where the right to speak, write, read and listen is supposed to be exercised.
This is not a new insight. In On Liberty, John Stuart Mill argued that protection “against tyranny of the magistrate” was insufficient to ensure a vibrant public sphere. He was equally wary of society’s tendency to “impose the tyranny of the prevailing opinion and feeling” by other means such as social and peer pressure. Almost a century later, George Orwell warned that, “Unpopular ideas can be silenced, and inconvenient facts kept dark, without the need for any official ban” when the public sphere is dominated by a centralized press owned by a few wealthy men. The American orator Frederick Douglass witnessed first-hand how attempts by defenders of slavery to silence abolitionist activism made his right to free speech illusory.
Even if free speech champions of the analogue age were wise to the dangers posed by private threats to free speech, the digital age has only exacerbated the problem. Meta alone has about 3.5 billion users across Facebook, Instagram and WhatsApp, while in 2021 YouTube had an estimated 1.86 billion users. Accordingly, these platforms have enormous power to decide the practical limits of free speech on a global level. But content moderation at scale is inherently difficult.
On the one hand, conspiracy theories and extremist speech can help incite violence, such as the attack on the Capitol on Jan. 6, 2021. On the other hand, policies aimed at tackling harms tend to undergo “scope creep.” Take the issue of COVID misinformation that first landed Mr. Rogan in hot water. Between April and June, 2020, Meta alone deleted seven million posts from Facebook and Instagram citing COVID-19 misinformation. During the same period YouTube reported 11,401,696 removals, a dramatic 93-per-cent increase from prepandemic levels at the end of 2019.
The purged content was not solely disinformation that could lead to imminent physical harms such as encouraging others to drink bleach. YouTube deleted a clip with prominent Stanford professor of epidemiology John Ioannidis, who early on argued for a less draconian response than lockdowns.
YouTube’s automated content moderation also removed as misinformation criticism of the Chinese government’s handling of the pandemic. The so-called lab-leak theory speculates that the pandemic may have originated at China’s Wuhan Institute of Virology. Facebook initially deleted lab-leak content as a crazed conspiracy theory, only to reverse course after independent researchers used social media and blogs to provide information that suddenly made the theory less implausible.
The dangers of fighting online disinformation through deletions were given prominent backing by a January, 2022, report by the British Royal Society, the U.K.’s most prominent scientific institution. The report emphasized that misinformation can cause real offline harms, but also warned against relying “on content removal as a solution to online scientific misinformation.”
The authors cautioned that deletions are not only ineffective, but may even be counterproductive, undermining rather than strengthening trust in reliable sources of scientific information. When it comes to deleting scientific misinformation, therefore, the cure may quite literally be worse than the disease.
It is true that Mr. Rogan and other famous multimillionaires with armies of dedicated followers have the means to resist the worst consequences of online purges. However, for every Joe Rogan there are thousands of ordinary people who don’t have the same immunity and risk being fired or disciplined for offending majority opinion. Despite enjoying the most robust legal protection against government censorship in the world, 62 per cent of Americans are afraid to express political opinions, according to a 2020 Cato Institute survey.
Moreover, when private platforms delete controversial content according to opaque terms shaped by outbreaks of viral outrage, it affects not only the speaker. Frederick Douglass argued persuasively that restricting free speech is a “double wrong” that violates both “the rights of the hearer as well as those of the speaker.” In the digital age the hearers consist of potentially millions of people who might appreciate perspectives and voices not usually covered in traditional media, and who may very well be able to distinguish fact from fiction and to encounter “offensive” ideas without becoming extremists.
One only has to look at how authoritarian states censor social media or even shut down the internet to appreciate that the benefits of providing ordinary people access to unmediated information outweigh the downsides, even if the harms are real, substantial and more visible than ever. At the same time, it is more and more difficult to reconcile the idea of egalitarian free speech with large, centralized, corporate and increasingly algorithmically driven social-media platforms acting as the conduits of global free speech.
The solution to this conundrum is not – as some U.S. Republicans insist – to compel private platforms to uphold the speech rights of their users through laws. Such efforts are as likely to result in collateral damage to free speech as the spate of laws and bills in democracies such as Germany, Denmark and Canada obliging Big Tech to remove “illegal” or poorly defined “harmful” content. A better way forward is to provide users with more control over content, which would empower individuals at the expense of centralized platform control.
But most important for the future of free speech is that those of us who have benefited from the unprecedented advances in human affairs that 2,500 years of this counterintuitive idea have helped bring about resist the force of “free speech entropy,” which pulls us toward intolerance of the ideas we loathe.
It is up to each of us to defend a culture tolerant of heretical ideas, use our capacity for critical thinking to limit the reach of disinformation, agree to disagree without resorting to harassment or hate, and to treat free speech as a principle to be upheld universally rather than a prop to be selectively invoked for narrow tribalist point scoring. Or to quote 20th-century free-speech scholar and judge Learned Hand: “Liberty lies in the hearts of men and women; when it dies there, no constitution, no law, no court can even do much to help it. While it lies there it needs no constitution, no law, no court to save it.”