Opinion

Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communication in the Max Bell School of Public Policy at McGill University, a senior fellow at the Centre for International Governance Innovation and a fellow at the Public Policy Forum.

Speaking to a technology conference in Paris last week, Prime Minister Justin Trudeau – a leader who has long championed the political and economic benefits of digital technology – channelled our cultural moment of tech backlash.

“What we’re seeing now is a digital sphere that’s turned into the Wild West,” he argued. “And it’s because we – as governments and as industry leaders – haven’t made it a real priority.”

This change in tone came the day after he signed the Christchurch Call – an effort led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron to curb the problem of viral hate speech and violent content online in the wake of a massacre that was livestreamed and distributed on platforms such as Facebook and YouTube.

But while it was a helpful rallying call, the Christchurch compact was also ultimately a missed opportunity. It has no enforceable mandates; it focuses overwhelmingly on technical fixes to what are also political, social and economic problems; and its framing around terrorism and hate speech is far too narrow, treating a symptom while ignoring the underlying disease. We don’t need to militarize the problem or play Whac-A-Mole with extremists: We need to govern platforms. The Christchurch Call won’t accomplish that.

In its wake, this week the International Grand Committee on Disinformation and Fake News, a group of parliamentarians from 14 countries, continues its work with a second set of hearings in Ottawa. The work of the committee (before which I will appear as a witness) has become a catalyst for a community of scholars, policy-makers and technologists who believe that a broader conversation about tech governance – one that squarely addresses problems embedded in the design of digital platforms themselves – is long overdue.

These problems include the financial model of what Harvard professor Shoshana Zuboff calls “surveillance capitalism,” by which vast stores of data about our lives are used to target content designed to change our behaviour; the way platforms manage their vast scale using opaque, commercially driven and poorly understood algorithmic systems; and the market dominance of a small number of global companies with rapidly growing purchase on our social, political and economic lives.

While governments have been slow to take on the challenge of governing big tech, those that have turned their attention to this policy space in a serious way are coming to markedly similar conclusions. In short: there is no silver bullet for the social and economic costs of the platform economy.

Instead, governments in France, Germany, Britain, the European Union, New Zealand, Australia and Canada – and even a growing number of political leaders in the United States – are articulating the need for a broad and comprehensive platform-governance agenda that is both nuanced enough to account for domestic differences in areas such as free-speech laws and internationally co-ordinated enough to create sufficient market pressure.

The contours of this agenda are taking shape through three policy frameworks. The first: content policies. Democratic governments need to decide whether their speech laws require updating for the digital world and how they will be enforced. At the moment, we have delegated this regulatory role to the platforms, which hire thousands of moderators to enforce their terms-of-service agreements. Democratizing this system will involve difficult decisions around liability (who should be liable for speech online: the individual who spoke, or the company that amplified, and profited from, the speech?), moderation (who is responsible for implementation: the platforms that host and filter content, or the governments that are ultimately democratically accountable?) and transparency (how can we bring daylight to the secretive art of microtargeting, by which advertisers target and effectively influence narrow bands of people using extremely precise data?). Early experiments in content policy by Germany and France are yielding evidence of what works and what doesn’t – examples upon which other countries can iterate.

Second: data policies. If we believe in the premise that society should be able to leverage public data for the public good, citizens should have far greater rights over the use, mobility and monetization of their data, and regulation must be matched with meaningful enforcement. Even the reported US$5-billion FTC fine against Facebook was seen as inconsequential by the markets. The EU’s General Data Protection Regulation provides an example of such a package, one being tinkered with in other jurisdictions, including California, where the California Consumer Privacy Act is poised to put pressure on Silicon Valley directly from its home state.

Third: competition policies. The EU and Britain have begun to explore new ways to curb the power of digital giants, and several of the U.S. Democratic presidential candidates have come out in favour of pursuing antitrust regulation. Such efforts could also include restrictions or rollbacks on how services and platforms are acquired and developed, as well as antitrust oversight that judges a company’s market power by more than price increases alone: how much data it controls, whether it constrains innovation and whether it threatens consumer welfare.

On Monday, Innovation Minister Navdeep Bains built on Mr. Trudeau’s speech, laying out the 10 principles of a proposed Digital Charter. It’s a signal that Ottawa might finally be ready to take a broader view of its responsibilities. But whether this charter can be more than a collection of digital initiatives and instead become a co-ordinated policy agenda, implemented with the urgency that the problem demands, remains to be seen.