
Audrey Kurth Cronin is professor of international security at American University and director of the Center for Security, Innovation, and New Technology. Her book, Power to the People: How Open Technological Innovation is Arming Tomorrow’s Terrorists, is a finalist for the Lionel Gelber Prize, which will be awarded on March 10.

Germany's Chancellor Angela Merkel stands next to relatives of a victim of the shootings at the memorial service for the victims in Hanau, Germany, March 4, 2020. (Kai Pfaffenbach/AFP/Getty Images)

In February, 43-year-old Tobias Rathjen killed 10 people in a mass shooting in Hanau, Germany. The attack demonstrated how online platforms connect neo-Nazis, incels (“involuntary celibates”), racists, xenophobes and conspiracy theorists into a global movement that appeals to weak-minded individuals. Mr. Rathjen left behind paranoid texts, a website and an English-language YouTube video espousing white supremacism, calling for genocide and claiming that secret mind-readers were controlling him.

It’s evidence that the same digital technologies driving the global economy to new heights are creating new threats in the form of deadly popular empowerment. We are experiencing a rare combination of open technology, expanded means of communication and the global spread of political violence – a trifecta that most recently occurred more than 100 years ago. While they may seem unrelated, mass shootings, knife attacks and vehicle assaults are powered by new communications links between individuals and audiences. Perpetrators can broadcast their violence directly online and inspire copycats. And today’s technology-driven political violence is happening regularly around the world.

To comprehend the influences that are shaping and driving this violence, we must take a broader historical perspective. These are early signs of the maturing of an open technological revolution that began in the 1990s, and experience shows that there are specific things we can do to alter its future trajectory.

Individuals now have access to digital tools that can put any cause or grievance on steroids, an important contrast to our recent history. During the 20th century, key lethal technology was mainly controlled by states. Protected by security clearances, hidden in government laboratories and funded by hefty state R&D expenditure, military innovation produced large-scale, high-tech weapons such as stealth bombers, Aegis cruisers or intercontinental ballistic missiles – complex systems that were expensive, rare and inaccessible to the public. The Kalashnikov assault rifle was an exception, openly shared and spread by the USSR, which did not believe in patents, and as a result, some 70 million to 100 million Kalashnikovs still haunt us today. But the 20th century’s iconic image was J. Robert Oppenheimer working away in a government lab with a small team of scientists, developing the nuclear bomb – a sophisticated, state-controlled, very secret and totally inaccessible innovation.

Today’s model inventor is more like Alfred Nobel tinkering with nitroglycerine in his family’s backyard shed during the 1860s, ultimately inventing dynamite. We are living through a period of open technological innovation similar to the maturing of the industrial revolution at the end of the 19th century. In that period, individual scientists and hobbyists experimented in workshops, garages, or attics. Such tinkering yielded high explosives (1867), the motorcycle (1885), the automobile (1886), the radio (1895) and the airplane (1903), none of them invented by the military yet all critical to the two world wars that followed. Periods of accessible technological innovation reshape societies from the bottom up, but, like the top-down impact of nuclear weapons, they can also reorder the world.

Today, in similar ways, individuals and small groups can experiment with technologies that are cheap, accessible, transportable, concealable and simple to use. The paradigm for the violence unfolding around us does not privilege one doctrine or ideology – like Islamism or fascism or authoritarianism – any more than the radio or the airplane privileged 19th-century monarchies or republics. We must look beyond the motivations that drive today’s chaotic violence and focus instead on how clusters of digital technologies are being combined and used in unprecedented ways.

Through that larger historical lens, big global patterns emerge.

First, and most obvious, digital technologies are changing how people mobilize, both for good and bad causes. We saw this most clearly with the dramatic growth of the Islamic State in 2014, but weapons training, long-range indoctrination and the recruitment of foreign fighters are not in themselves new. In the 19th century, anarchists published pamphlets with detailed instructions for how to make, acquire and target dynamite attacks, killing thousands of people on nearly every continent as part of the first global wave of modern terrorism.

What’s different today is that most people carry a powerful computer in their pocket, which can reach anyone, anywhere, and which is designed to be addictive. It’s a more refined tool of social, physical and psychological manipulation than any sensationalist 19th-century newspaper article or anarchist pamphlet could ever be.

As a result, digital technology is reshuffling power in unexpected ways. Hong Kong protesters use Telegram, Apple’s AirDrop feature and crowd-funded advertising to sustain their uprising, even as China employs information operations via Facebook and Twitter to frame the protesters as criminals and stooges of Western influence. A combination of high-speed connectivity, smartphones, WhatsApp, Facebook Live, Twitter and a range of other digital means outstrips our ability to control or even keep up with the shifting global and local dynamics. China’s ability to control internet access and use ubiquitous facial-recognition technology is not the end of the story.

And it’s not just the availability of messages that makes mobilization more potent. Online anonymity has effectively ceased to exist. With the vast amount of personal data now accessible, it is simpler than ever to locate and groom individuals, using Facebook profiles, data from major hacks or even online quizzes. Any social-media user can be personally targeted, whether by offensive U.S. cyber operations against the Islamic State’s caliphate in 2015 or by Cambridge Analytica’s harvesting of Facebook data to sway individual Americans during the 2016 electoral campaign.

Which brings us to the second big trend: greater reach, or the increased ability for everyone to project power. Today’s digital platforms are designed to facilitate experimentation. Most of the hard work has been done. Individuals can operate a robot or quadcopter, for example, without knowing how to build a smartphone. They download an app and off they go.

Clusters of technologies enable amateurs to combine old and new capabilities to create something novel. Quadcopters carrying heavy cameras to film your wedding can also tote explosives. GoPro cameras combined with Twitch and an AR-15 create an instant, globally distributed personal action film where a mass murderer plays a “hero.” Criminals, terrorists and insurgents can’t go toe to toe with conventional militaries, but they don’t need to: The element of surprise and a global “battlefield” are giving them an edge.

Meanwhile, we’re exponentially increasing our cyber vulnerability. Machine-to-machine connections in agriculture, water purification, electrical grids and industrial settings make our societies utterly reliant on connectivity. The Internet of Things, a phrase referring to products with internet access and sensors, is also the Internet of Threats. Millions of proliferating internet-connected devices such as door locks, kitchen appliances, thermostats, voice-activated assistants, sleep-monitoring systems and hospital heart monitors are equipped with sensors (and usually microphones) that directly receive and transmit data without human involvement. They are also easily hackable. How often do you update the software on your router? How about your television? In many sectors, you can’t even buy products without internet connectivity now, because the data that companies collect on you (then analyze, combine, reshuffle and sell to data brokers) may be the most valuable part of all.

How do we handle all that data? That brings us to the third big trend: systems integration. Our devices are so sophisticated, fast, data-rich and advanced that human intelligence cannot manage them all. The answer is to build in degrees of autonomy – meaning a machine is designed to sense the environment, process what is happening, and act without direct human involvement.

Machine learning is a common element of many of these systems: recognizing patterns and drawing inferences from large data sets can yield degrees of artificial intelligence (AI). AI may help us predict forest fires or prevent pandemics, but unfettered, it could also select and target individuals or even categories of human beings.

Like its precursors, military AI is a double-edged sword. Will the future be about better, more discriminate targeting, beyond what faulty human operators can do? Or a dystopian world of killer robots that evade human control? Either is possible. And building strong ethical guidelines for state militaries is just the beginning. Simple forms of autonomy and artificial intelligence will become cheaper, more potentially lethal and more accessible.

So, to sum up, the diffusion of new technologies is democratizing the ability to do three things that until very recently only armies of advanced nation-states could do: mobilize large numbers of people, project power globally and integrate complex systems. As a result, individuals and small groups have acquired greater lethal power to experiment, innovate and kill.

Our era of open technology is empowering a growing range of threats. Some actors are unaffiliated (e.g., mass shooters, slashers, vehicle attackers). Others are terrorists (right-wing, left-wing, jihadi, nationalist) or hackers (black hats, cyber criminals, mercenaries, virus writers, extortionists). They may be members of organized crime syndicates, private armies or state proxies. With evolving technological means, these categories of nefarious actors are increasingly difficult to differentiate. They evade existing legal and regulatory frameworks, forcing democracies to find a new path between authoritarianism and anarchy.

On top of that, autocratic states such as Russia or Iran can manipulate individuals for their own nefarious purposes. They can use online proxies to heighten the culture war, target narcissists or psychopaths, provoke anger and paranoia, and incite violence remotely. Why meet us on a battlefield? It’s easier to destabilize democracies from within. Adversaries can spur us to annihilate each other where we shop, play, learn and worship.

So, what can we do? Fortunately, a lot. Individuals, private companies and democratic governments have many promising courses of action to mitigate the risks and maximize the benefits of new technologies.

Citizens should demand control of their own data and the ability to choose whether they want internet-connected products or not. A product’s core purpose – say, a refrigerator chilling food or a car driving on the road – should continue to function independently of whether or not it is internet-connected. When companies sell products with shoddy cybersecurity – such as hackable door locks, smart light bulbs, or electric scooters – consumers should be able to sue for damages. And citizens everywhere should push for stronger privacy legislation. Those living in jurisdictions that are currently leading the pack – such as Europe, where the General Data Protection Regulation is in force, or California, with its Consumer Privacy Act – must take full advantage of their rights. Privacy and security are no longer in opposition: Now, privacy is security. We must lock nefarious actors out of our lives.

We need a new regulatory model for social-media companies. In particular, private tech companies should take responsibility for all of the activity on their platforms. In the wake of the March, 2019, Christchurch tragedy, New Zealand Prime Minister Jacinda Ardern convened a Call to Action summit in Paris to bring governments and tech companies together in eliminating terrorist use of social media. Companies have since instituted better measures to remove extremist content from platforms such as WhatsApp and YouTube, and that’s a good start. But we’re still nibbling around the edges of a problem that demands a culture shift. In the 19th century, the first wave of modern terrorism was sensationalized by mass-market newspapers whose publishers, such as Pulitzer and Hearst, built enormous empires on it. Top-quality newspapers eventually developed higher editorial standards, and so must companies such as Facebook and Twitter.

Finally, democratic governments must stop blaming the victim and get serious about cybersecurity. Urging people to practise better “cyber hygiene” is like blaming victims of the coronavirus for not washing their hands. The digital space is integral to our infrastructure; cybersecurity is national security. Government legislatures must work more closely with private tech firms to develop smart regulations, such as pushing automatic software updates to everyone, and sharing known vulnerabilities so they can be patched for everyone’s benefit.

Guidelines for the collection, control and ownership of data are also urgently needed. Governments could, for example, put in place the kinds of ethical review boards that universities use to protect the rights and welfare of human subjects. What riskier experiment could there be than building and exploiting massive human databases? We must strive to protect those data sets as part of our human heritage, to be managed in the public interest – not least because artificial intelligence is fuelled by data.

Above all, we need to move faster and be more aware of the implications of our open technological revolution in its full historical context. The earlier we get working on solutions, the better we’ll be able to realize the promise of our brilliant technologies and limit their peril.
