Taylor Owen is the Beaverbrook chair in media, ethics and communications and the director of the Centre for Media, Technology and Democracy at McGill University, as well as the host of the Big Tech podcast.

Anonymity has long been a core attribute of the internet, and encryption is the backbone technology that enables it. The ability to speak and act anonymously and share information online securely enables free speech and political activity, protects vulnerable and targeted groups, allows for personal development without the threat of persecution or shame and secures the digital economy. This anonymity is seen by many as an inherent right in the digital age. But is this right absolute? Recent revelations about the wide range of technologies being used to circumvent online privacy point to a shift in the debate.

There is a massive online trade in child sexual abuse material (CSAM). Europol’s repository of CSAM contains more than 46 million unique images or videos. According to the global threat assessment report published by the WeProtect Global Alliance – a network of governments, private companies and law enforcement specialists working to tackle CSAM – U.S. technology companies made 18.4 million referrals of CSAM to the National Center for Missing and Exploited Children (NCMEC) in 2018 alone. This illicit trade is enabled in part by encrypted communications; last year, a former Children’s Commissioner for England, Anne Longfield, warned that end-to-end encryption makes it harder to police child abuse and grooming online. In crime data compiled by Statistics Canada, rates of creation and distribution of CSAM increased 27 per cent during the pandemic. And citizens and regulators around the world are increasingly demanding that tech companies do more to counter this harm.

Last week, Apple did just that. The company announced a set of tools that will enable it to detect images of child sexual abuse in communications that were previously protected through encryption. Apple will now be able to use machine learning to convert images – before they are uploaded to iCloud or sent via iMessage – into a string of characters, or hash, that it can then compare against the hashes of known exploitative images. If images of child sexual abuse are found, the NCMEC will be alerted.
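
To make the mechanism concrete, here is a minimal sketch of perceptual-hash matching, the general technique behind this kind of detection. This is not Apple’s implementation – the company uses a proprietary neural hash and cryptographic matching protocols – and the function names, threshold and hash database below are illustrative assumptions.

```python
# A minimal sketch of perceptual-hash matching (not Apple's NeuralHash).
# Idea: reduce an image to a short fingerprint, then compare that
# fingerprint against a database of fingerprints of known images.
# Assumes Pillow is installed (pip install Pillow); KNOWN_HASHES below
# stands in for a hypothetical database of hashes of known images.

from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint.

    Shrink to 8x8 grayscale, then set one bit per pixel depending on
    whether it is brighter than the image's mean brightness. Visually
    similar images (re-encoded, resized, lightly edited) tend to
    produce similar fingerprints.
    """
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def matches_known(fingerprint: int, known_hashes: set[int], max_distance: int = 5) -> bool:
    """Flag a match if the Hamming distance to any known hash is small."""
    return any(bin(fingerprint ^ h).count("1") <= max_distance for h in known_hashes)


KNOWN_HASHES: set[int] = set()  # hypothetical database of known-image hashes
# if matches_known(average_hash("photo.jpg"), KNOWN_HASHES): report it
```

The key property is that such a hash survives re-encoding and resizing, so a match can flag a visually identical image without the matching system ever displaying or transmitting the photo itself.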

While Apple has positioned itself as a company that prioritizes user privacy (unless, of course, the user is in China, where Apple has made broad concessions to the government), these new capabilities shift the balance between privacy and the need for greater security. And they have been deeply polarizing, pitting digital-rights activists against child-protection groups. What is clear is that this new capability highlights the tension between competing rights: the right to privacy and the right to be protected from abuse.


In 2018, researchers at the Citizen Lab in Toronto, led by Ron Deibert, found themselves with a unique window into the murder of journalist Jamal Khashoggi by the Saudi government. The lab had been tracking the use of sophisticated spyware developed by an Israeli company, the NSO Group. Once a phone is secretly infected with the NSO Group’s tool, all data on the phone can be collected, the location tracked and the microphone and camera activated. It is a near-complete invasion of privacy.

Researchers at the lab found an infected phone in Montreal that turned out to belong to a high-profile Saudi YouTuber named Omar Abdulaziz, who had been communicating with Mr. Khashoggi. The day after Citizen Lab published a report detailing Saudi use of this spyware to track dissidents, Mr. Khashoggi was reported missing. Days later, the world would learn that he had been killed inside the Saudi consulate in Istanbul and dismembered with a bone saw by assassins sent by the Saudi government.

Last month, the world learned much more about the NSO Group and its powerful surveillance tool, called Pegasus. An investigation by 16 international media organizations – based on data provided by Amnesty International and the French NGO Forbidden Stories, and analyzed by the Citizen Lab – showed the malware had been found on the phones of journalists, human-rights activists and politicians around the world.

Such hacking and spyware tools are of course not used solely by illiberal regimes. They are often bought and sold on open markets and used for policing and investigative purposes. But commercial surveillance tools are not the only way to attack the systems that secure our digital information.


In a remarkable new book, This Is How They Tell Me the World Ends, New York Times cybersecurity reporter Nicole Perlroth details the underground market for security vulnerabilities. Government agencies such as the National Security Agency (NSA) will pay millions of dollars to hackers who find security vulnerabilities in hardware and software. But they don’t buy information about these vulnerabilities – called “zero days” because the vendor has had zero days to fix them – so they can patch them. They buy such information so they can exploit it – to, for example, hack into our phones.

Illiberal regimes around the world also use zero days for a wide range of strategic purposes. China has long exploited them to steal industrial secrets – everything from the design of the F-35 stealth fighter to Google’s source code to the formulas for Coca-Cola and Benjamin Moore paint. Tehran uses zero days to monitor dissidents. The Saudi Arabian government uses them to track journalists, such as Mr. Khashoggi. And the North Koreans use them to deploy ransomware to raise money.

The challenge is that almost everyone on the planet uses the same suite of technologies. The same iPhone security hole that allows the NSA to snoop on terrorists can also be used by Saudi Arabia to threaten dissidents or by China to spy on the Uyghurs. Even when these vulnerabilities are exploited strategically – to sabotage Iranian nuclear centrifuges with malware, for instance – the hacking tools built on them can undermine the broader tech infrastructure once they’re released into the wild.

In other words, in the world of cyberwarfare, an offensive advantage is also a glaring defensive vulnerability. The very tools our governments use in the name of security leave us highly vulnerable to hacking by foreign states and malicious actors.

These stories point to key questions facing open societies as we navigate the future of our digital infrastructure: Who has a right to be anonymous? And whose information has a right to be secure?


Last year, during deliberations of the Canadian Commission on Democratic Expression, which I am co-chairing with former Supreme Court chief justice Beverley McLachlin, no issue came up more often than the right to anonymity online, and how to weigh its benefits against the harms it can enable. Should platforms enforce real-name policies? Should police have access to the accounts of individuals they are investigating? Should platforms share data on illegal activity with the police? Should our governments participate in the hacking market in a manner that could blow back against us?

These questions are difficult because there is a clear tension between all the benefits afforded by anonymity and secure communication and the harms they facilitate. The right to speak anonymously is crucial for human-rights advocates, journalists, whistle-blowers and marginalized groups. The idea that online anonymity only encourages harassment of those groups is a fallacy; it has been widely shown that much of the abuse on Facebook and Twitter is actually committed by people using their real names. And government databases of “real names” may be used for surveillance or profiling, in both democracies and illiberal regimes. Yet serious crimes are conducted under the cover of anonymity.

These tensions are exacerbated by the wide spectrum of technologies in this space. Ransomware and zero-day exploits, spyware, digital forensic policing tools and broader surveillance tech are used and traded by democratic and illiberal governments alike. This presents a classic dual-use problem.

Take the Canadian company Netsweeper. While its internet filtering technology could in theory be used to help libraries block pornography or allow companies to restrict social-media use by employees, Citizen Lab research has shown how it is also used to block access to a “wide range of digital content protected by international legal frameworks, including religious content in Bahrain, political campaigns in the United Arab Emirates, and media websites in Yemen.” As legal scholars Siena Anstis and RJ Reid recently argued in a brilliant paper analyzing the governance challenge posed by Netsweeper, we urgently need to update our export rules; they propose a mandatory human-rights due-diligence requirement for surveillance technology.
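
The dual-use nature of this kind of filtering is easy to see in miniature. Below is a hedged sketch of how category-based filtering works in general – this is not Netsweeper’s code, and every hostname and category here is a hypothetical stand-in.

```python
# A minimal sketch of category-based internet filtering (not Netsweeper's
# implementation). The dual-use problem is visible in the code itself:
# nothing distinguishes a "pornography" blocklist from a "political" one;
# only the policy configuration changes.

BLOCKED_CATEGORIES = {"pornography"}          # a library's policy...
# BLOCKED_CATEGORIES = {"political", "news"}  # ...or a censor's

CATEGORY_DB = {  # hypothetical hostname-to-category database
    "example-adult-site.com": "pornography",
    "example-campaign-site.org": "political",
    "example-news-site.net": "news",
}


def allow_request(hostname: str) -> bool:
    """Return False if the hostname's category is on the blocklist."""
    return CATEGORY_DB.get(hostname) not in BLOCKED_CATEGORIES


# allow_request("example-news-site.net") -> True under the library policy,
# False under the censor's policy, with no change to the mechanism itself.
```

The mechanism is identical in both configurations; only one policy line changes. That is the dual-use problem in a nutshell: the technology cannot tell a library’s acceptable-use policy from state censorship.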

International precedents are fairly limited, but countries around the world are attempting to step into this space, including the Netherlands, Britain and the EU. Given how difficult it is to pin these activities to any one jurisdiction, and given that many of these technology companies are fundamentally global in nature, we are likely to need new governance measures.

In the interim, the privacy rights of citizens present a feasible common denominator. The legal and regulatory ambiguities that allow companies and governments to monitor and surveil citizens are a function of our failure to update our privacy regimes for the digital age. Closing those gaps might be a first step toward addressing a much wider – and more politically charged – range of challenges regarding who should and who should not be anonymous.
