P.W. Singer is a strategist at New America, a consultant for the U.S. military, and the author of several books including Wired for War: The Robotics Revolution and Conflict in the 21st Century. Emerson Brooking is an expert on conflict and social media who served, most recently, as a research fellow at the Council on Foreign Relations. They are the authors of LikeWar: The Weaponization of Social Media.
There was a time when measles – terribly painful and one of the most contagious viruses of all – infected tens of thousands of children each year. In Canada, the count reached its high point in 1935, when 83,135 cases were reported. The virus killed hundreds each year and left many more with hearing loss and brain damage. Across the border in the United States, annual case numbers reached as high as eight million. That all changed in 1963, when scientists developed a cheap, effective vaccine. By 1998, the disease was effectively defeated within Canada, with only 12 cases reported in total, all within a small religious community that opposed immunization and a handful of international travelers. In the United States, the government went so far as to declare that measles had been eliminated.
And yet, it wasn’t. By 2015, the disease had made a comeback, part of a broader 30-per-cent rise in vaccine-preventable diseases, which reached 8,010 cases in Canada. Then came the news that 147 children were diagnosed with measles after visiting California’s Disneyland in 2015. The outbreak rippled outward: one case travelled from Disneyland back to Quebec, leading to 157 secondary cases.
What had happened? Measles hadn’t magically gotten stronger, nor had the vaccine become less effective.
The answer could be found on the very same technology that most worried parents were now using to search for information on the disease: the internet. Specifically, it could be found in the network of “anti-vaxxer” groups that emerged online, using the new platforms of social media to organize and drive their messages viral. They claimed that vaccines caused autism and that childhood vaccination was either blasphemous, a corporate scam or a secret genetic engineering experiment. Thanks to these online communities, the anti-vaxxer movement exploded in popularity, becoming ever more militant and conspiracy-minded in the process. It was then joined by prominent celebrities, including a certain real estate tycoon turned reality show host (and since turned President), who saw the controversy as a way to draw attention to his own brand (@realDonaldTrump tweeted out to his millions of followers in 2014 “Healthy young child . . . gets pumped with massive shot of many vaccines, doesn’t feel good and changes – AUTISM. Many such cases!”), magnifying the reach of the conspiracy. Sadly, this viral online contagion had a real effect on the most vulnerable in our society. Studies found that the share of parents opting to leave their children unvaccinated had increased by a factor of four. This wasn’t merely a public-health crisis – it was also an informational one.
It also provided a glimpse of a much larger problem. Thanks to the rapid and near-universal adoption of social media, the world has entered an age of online conflict with real-world results. News, politics and war are increasingly being shaped not merely by the hacking of networks (known as “cyberwar”), but also by the hacking of the people on those networks – by driving ideas viral online via a mix of “likes” and lies (what we call “LikeWar”).
Every moment of every day, thousands of self-interested groups seek to steer online opinion on platforms such as Twitter, Facebook and Instagram toward their goals. Making matters worse, they have discovered that these platforms that we all now use reward not veracity, but virality. Just as conspiracy theorists invent falsehoods about vaccines, so do political partisans promote absurd rumors about their opponents and bigots spread deceit about entire religions. And because it works, more and more copy the same tactics – tactics that have literally changed the world. These misinformation campaigns have not just helped spur the re-emergence of a “cured” disease; they have also been a crucial part of the story of the election and policies of Donald Trump, the success of the Brexit campaign and the genocide waged against hundreds of thousands of Myanmar’s Rohingya Muslims. The lies may originate in the digital world, but their effects no longer stay there.
Why are today’s internet users so vulnerable to viral falsehoods – and so willing to act on them? In short, because the size and speed of the modern information environment have outpaced humanity’s ability to process it. A salacious, false story spreads about 10 times faster than a real one. Attempts to “fact check” these headlines are almost universally ignored.
As with real disease, age is no defence. Even those who grow up in the online world are susceptible. More than half of U.S. middle schoolers – who spend an average of 7.5 hours online each day outside of school hours – cannot discern advertisements from legitimate news, nor distinguish basic fact from fiction. “If it’s going viral, it must be true,” one middle-schooler patiently explained to a team of Stanford researchers. There was not a shadow of doubt in her voice. “My friends wouldn’t post something that’s not true.”
But the rest of us are hardly off the hook. The reason has to do with our human identity as fundamentally social creatures. In studies across numerous cases and countries and involving millions of internet users, researchers have found a cardinal rule that explains how information spreads online. The best predictor is not truthfulness or even content: It is the number of friends who share it first. If people you know and trust share something, you are much more likely to share it yourself – and to believe what it says. In other words, our own networks put us at risk.
If the dangers of viral misinformation can be likened to a public-health crisis, the solutions we need to put in place may be similar, too. For civil society, the most important measure is an investment in digital-literacy programs on par with the investment in health-education programs over the past century, designed to inoculate a society against viral outbreaks. The anti-vaxxers are only an illustration of larger problems. Given the use of the very same tactics of disinformation by Russian campaigns targeting NATO democracies, as well as by domestic extremist groups, these literacy campaigns have a national-security consequence as well. Indeed, many NATO states lying along Russia’s borders have already implemented them, while, by contrast, the two member countries across the ocean have not. We may think ourselves safe from foreign threats, but we are now connected – and at risk – in ways unlike anything during the Cold War or the World Wars.
The most effective of these initiatives don’t simply warn people about general misinformation (e.g. “Don’t believe what you read on the internet”) or pound counterarguments into their heads (e.g. “Here are the 10 reasons why global-warming deniers are wrong”). Rather, effective information-literacy education works by presenting students with specific, proven instances of misinformation, encouraging them to dissect the fallacious data and arguments on their own. In time, these sorts of programs can go a step further, thrusting students into the roles of fake-news tycoons and teaching them how to manufacture “facts” and stoke outrage. As people learn to weave falsehoods, they also become better at dissecting the falsehoods spun by others.
Just as in public health, these education programs must not be confined to our schools; they must be joined by a broader, whole-of-society effort to inoculate vulnerable citizens against harmful misinformation. A number of countries have created everything from public-awareness campaigns to an emergency-alert system, akin to warnings of dangerous storms or disease outbreaks, intended to slow the spread of such falsehoods before they can do too much damage. We also need the platforms to pitch in more, helping to create firebreaks against the spread of misinformation and “deplatforming” – booting off the networks – those who knowingly spread lies intended to harm society. You have a right to free speech. You do not have a right to use a private company’s network to deliberately target its other customers.
Yet none of these efforts can begin in earnest until a first, most basic step is taken. We must recognize that online misinformation is no longer simply an annoyance or a distraction but a palpable harm, as real as the pathogens that still roam the world. And just as societies once came together to defeat the worst of these diseases, so must we come together again to combat the threat of viral falsehood.