Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communication and associate professor in the Max Bell School of Public Policy at McGill University.
It has been a game-shifting 2018 for Big Tech. It was the year that long-simmering concerns about its potential negative effects on our economy, on our personal lives and even on our democracy broke into public debate.
It was the year that much of the media got serious about tech journalism, when its balance tipped from gadget reviews and chief-executive profiles to treating Silicon Valley as a node of power in society to be held accountable.
It was the year that tech company employees began holding their employers to account. At Google, walkouts were staged over gender policy, and petitions demanded an end to Chinese expansion plans and to the development of “warfare technology.” At Microsoft, protests were held over contracts with the U.S. Immigration and Customs Enforcement agency. At Amazon, there was backlash against the use of facial recognition to assist law enforcement. And at Facebook, employees started speaking far more openly to the media as the company careened from scandal to scandal.
It was the year that tech executives awoke to their new operating environment. The U.S. Congress and parliaments around the world ordered CEOs, accustomed to being adored as the leaders of venerated companies, to testify and answer tough questions. It was also the year that these same CEOs, whether motivated by sincere interest in fixing structural problems with their products or by concern for protecting public image and shareholder value, began making meaningful reforms to their companies, with varying degrees of success.
Finally, and perhaps most consequentially, 2018 was the year that tech companies lost the benefit of the doubt from governments. This was a result of a growing body of investigations, academic research and enterprise reporting detailing the ways in which social platforms have been used to undermine democracy. It also stems from a concern that the economic benefits of the digital economy are flowing predominantly to a small handful of U.S.-based global companies. But the final straw for many legislators was a November article in The New York Times revealing a disconnect between Facebook’s public statements about abuses on its platform and the aggressive tactics its executives used to fight the story. To many in government, this confirmed that the tech-sector giants should be treated like any other large multinational corporation, and that it’s time to get serious about governing Big Tech.
Luckily, there are some relatively easy places for governments to start. They can bring sunlight to the world of micro-targeted advertising through new transparency laws. They can overhaul data-privacy regimes that are limited in scope, weak in capacity and unco-ordinated globally. They can mandate the identification of automated accounts so that citizens know when they are engaging with a machine or a human. They can modernize tax and competition policy for the digital economy. And they can fund large-scale digital literacy initiatives for citizens of all ages.
But beyond these short-term Band-Aids, 2019 must also be the year we start grappling with a set of thornier questions at the intersection of technology and democracy.
Democratic governments will need to wrestle with how their speech laws apply to the digital world. This is going to require bringing together the private sector and civil society in a hard discussion about the nature and limits of free speech, about who is censored online and how, about responsibilities for moderating speech at scale, and about universal versus national speech norms.
And while the idea that platform companies are simply intermediaries – and therefore not liable for how their services are used – has been foundational to the innovation, growth and empowerment created by the open internet, the sheer breadth of the economic and social services now provided by platforms might demand a more nuanced approach to how they are governed. If this comes at the cost of some of that innovation, democracies must be free to decide whether the trade-off is worth making.
Such democracies will need to start co-ordinating their public-policy efforts around emerging technologies, too. There is currently a disconnect between the global scale, operation and social impact of technology companies and the national jurisdiction of most countries' tech laws and regulations. As former BlackBerry co-CEO Jim Balsillie has argued, the digital economy may need its Bretton Woods moment.
How we handle these challenges will set the tone for how we’ll grapple with the even knottier ones that are to come. As de facto public places increasingly involve private interests – such as Alphabet’s planned smart city in Toronto, or Amazon’s competition over which city would earn the right to be home to its HQ2 headquarters – governments will need to lead a conversation about what this collision looks like. What, for instance, would it mean to treat the data created by the citizens of cities as a public good?
And while governments devote substantial resources to growing the business of artificial intelligence, which promises to reshape broad aspects of our lives, we must also plan ahead to ensure these nodes of decision-making power are brought into the norms of accountability and transparency that we demand in democracies.
This year was defined by outrage against tech – but 2019 will be the year that the long and messy process of governing it begins.