Taylor Owen is the director of The Centre for Media, Technology and Democracy at McGill University and the host of the Big Tech podcast.
During a press call in March, 2020, Facebook CEO Mark Zuckerberg was asked whether he could ensure that the company’s increasing reliance on powerful artificial intelligence would be used responsibly. He replied, “We basically hold ourselves accountable.”
This telling response points to a critical question facing governments around the world. Do they have the power, capacity and will to govern the digital sphere, or will they cede this responsibility to a handful of global companies with growing control of our digital infrastructure?
It is increasingly clear that the many benefits of digital platforms such as Facebook, Amazon and Google come with significant economic, social and political costs. The erosion of privacy, the toxic effects of hate speech, the undermining of reliable information and the distorting effects of market concentration are all a function of financial incentives run amok in a largely self-governed internet.
While it is unambiguously the responsibility of the state to mitigate harms caused by market failures, the challenge is that these new technologies fit uncomfortably into our existing laws and regulations, which were built for the industrial rather than digital world. These companies are global, they are vertically integrated across multiple domains of government responsibilities, they evolve quickly and they have tremendous lobbying power. All of which make them hard to govern.
Increasingly, the companies themselves are stepping into this governance vacuum, acting less like nimble startups and more like nation states. Faced with the challenges of governing the 100 billion daily speech acts of its 2.2 billion users, Facebook has gone as far as to create what it calls the “Oversight Board,” a semi-independent committee that is mandated to review some of its more controversial take-down decisions, including banning Donald Trump’s account. This is a type of governance, but it is not democratically accountable in any meaningful sense.
The result is a collision course between companies acting like states (becoming what legal scholar Kate Klonick calls the New Governors) and democratically accountable governments trying to exert power in a domain they struggle to control.
In the past few years, however, governments around the world have begun to experiment with how to hold platforms accountable, resulting in a governance renaissance and, ultimately, a debate about the future of liberal democracy itself.
While it took them far too long to get serious, democratic countries are starting to use a set of data, content and competition policies to address the harms caused and the power exerted by Big Tech.
Because data and the algorithmic systems they fuel are the lifeblood of the platform economy, governments are developing new ways to protect privacy, render algorithmic decision-making more accountable and enforce new transparency rules. A wide range of harmful and even illegal activity and speech is enabled and incentivized online, so governments are developing new hate-speech laws, content-regulation bodies and take-down laws. And because old notions of consumer harm don’t apply to services that are free, governments have been thinking more broadly about market dominance, looking at tools such as limiting mergers and acquisitions, forcing companies to make their platforms and tools more interoperable with competitors, and imposing arbitration.
Enter the Australian government.
The law attracting global headlines this week – Australia’s news media bargaining code – is the result of a three-year investigation into the anti-competitive practices in the digital advertising market, driven by a concern that abuses of market power by Facebook and Google undermine the financial viability of journalism organizations. The new code addresses this bargaining power imbalance by requiring companies to enter into binding arbitration with publishers, and platforms to publicize their ad sales data and give notice to publishers of algorithmic changes that could have a significant effect on the distribution of content.
Although there are many reasonable critiques of the Australian approach, one thing is undeniable: It is democratically legitimate. Which is exactly what makes Facebook’s and Google’s responses so revealing.
Google threatened to leave Australia entirely, and then cut a side deal with News Corp.’s Rupert Murdoch (an imperfect spokesperson for reliable information if ever there was one). Facebook decided to block Australian news organizations from using its platform altogether, although several days later, the company backtracked and promised to restore news content after the government agreed to minor amendments providing more time to cut deals with publishers. Setting aside the implications of blocking reliable information in the middle of a pandemic, this negotiation tactic demonstrates a breathtaking disdain for the democratic process.
The outcome of this showdown was a slightly watered-down law that will almost certainly lead to a flood of ad hoc deal-making between publishers and platforms, exacerbating a range of negative incentives in the journalism industry. We are barrelling toward a world where most publishers will receive funding directly from the companies they should be holding accountable.
The message from platforms was perfectly clear: We will support journalism, but only on our terms. We will support regulations, but only those that align with our interests. These threats are ultimately a demonstration of power to other governments considering similar laws, many of which are starting to work together.
The core challenge facing governments such as Australia’s and Canada’s is that no single state has enough power (save perhaps the United States) to really take on these companies. What is needed is not a litany of national approaches, each of which can be fought and picked apart by local lobbyists, but a global one.
In areas of clear national jurisdiction and context, countries are learning from each other’s policy experiments. For example, new hate-speech laws have been passed in jurisdictions around the world and are coming soon in Canada. Each uses a slightly different model, allowing governments to learn from each other’s experience. When the German government imposed steep fines of up to €50-million for failure to remove unlawful content and hate speech, platforms understandably responded by over-censoring. Other countries can now evolve the model.
There are also areas in which governments can simply better co-ordinate in order to replicate laws that have been successfully passed in other jurisdictions. For example, Britain now classifies gig workers as “workers” instead of contractors and has introduced a sweeping new code to protect kids online. France has redefined Airbnb as an “information society service,” and the EU has imposed new data-sharing and transparency requirements on platforms. All of these policy ideas open the door for other countries to immediately demand the same. If these policies work in one country, they can be adapted to work in others.
Finally, there are cases in which international collaboration will be needed in order to create enough market weight to shift the incentive for what are ultimately global powers. This is certainly the case with antitrust investigations, where countries are beginning to band together; in tax policy, where we need common rates to avoid the race to the bottom currently being won by Ireland; as well as in new data privacy laws, where it simply makes sense to have common standards across national borders.
There are, however, two challenges with this global approach. The first is that we lack suitable international institutions through which we can iterate, co-ordinate and collaborate digital policy agendas. There is an urgent need for a Bretton Woods moment to imagine this new international architecture.
The other challenge is China.
Much of the debate about Big Tech is based on two false assumptions: that only democracies want to regulate tech; and that there is only one set of global platforms to contend with.
The reality is that illiberal and autocratic regimes around the world are increasingly using state power to control online activity, sometimes using regulation, and in many cases turning to a separate set of tools entirely, ones made by China.
While Western Big Tech may enable a range of illiberal and anti-democratic behaviour, China has developed a separate set of companies and tools that explicitly do so. U.S. companies have used the data they collect and the power they wield over their users for commercial gain, but their Chinese counterparts are using a similar design to allow governments to control their citizens.
Using tools such as facial-recognition technology, AI-driven mass censorship, and social credit systems that use widespread surveillance data to rank the behaviour, value and rights of citizens, the Chinese model offers illiberal and autocratic regimes an attractive alternative to the Silicon Valley offering. And many are adopting this dystopian tech with enthusiasm.
All of this makes democratic governance of the open internet all the more important and urgent. We cannot let it drift in the same illiberal direction. Doing so is going to demand policy creativity, collaboration among democracies and the courage to stand up to the Big Tech lobby.
Which is why, ultimately, the governance struggle between Big Tech and the nation state is actually about the future of liberal democracy itself.