Taylor Owen is an assistant professor of Digital Media and Global Affairs at UBC and a fellow at the Public Policy Forum.
The unfolding drama surrounding Silicon Valley and the 2016 U.S. presidential election has brought much needed attention to the role that technology plays in democracies. On Thursday, Facebook announced the Canadian Election Integrity Initiative, the very premise of which invites the question: Does Facebook threaten the integrity of Canadian democracy?
It is increasingly apparent that the answer is yes.
Facebook's product is the thousands of data points it captures from each of its users, and its customers are anyone who wants to buy access to these profiles. This model is immensely profitable. The company's annual revenue, nearly all of which comes from paid content, has more than tripled in the past four years to $27.6-billion (U.S.) in 2016. But the Facebook model has also incentivized the spread of low-quality clickbait over high-quality information, enabled a race to the bottom for monetized consumer surveillance, and created an attention marketplace where anyone, including foreign actors, companies or political campaigns, can purchase an audience.
A key feature of the platform is that each user sees a personalized news feed chosen for them by Facebook. This filtering is done through a series of algorithms, which, when combined with detailed personal data, allow ads to be delivered to highly specific audiences. This microtargeting enables buyers to define audiences in racist, bigoted and otherwise highly discriminatory ways, some of questionable legality and others merely an affront to basic decency.
The Facebook system is also a potent political weapon. It is increasingly clear that Russia leveraged Facebook to purchase hundreds of millions of views of content designed to foment divisions in American society around issues of race, immigration and even fracking. And it's of course not just foreign actors using Facebook to foster hate. Just this week, Bloomberg reported that in the final weeks of the U.S. election, Facebook and Google employees collaborated with extreme activist groups to help them microtarget divisive ads to swing-state voters.
Even without this targeting, content regularly goes viral regardless of its quality or veracity, disorienting and misleading huge audiences. A recent fake video showing the impact of Hurricane Irma was viewed 25 million times and shared 855,000 times (it is still up).
And here's the rub: When Facebook hooks up foreign agitators with microtargeted U.S. voters, or amplifies neo-Nazis using the platform to plan and organize the Charlottesville rally, or offers "How to burn jews" as an automatically generated ad-purchasing category, it is actually working as designed. It is this definition of "working" and this design for which Facebook needs to be held publicly accountable.
Some jurisdictions are starting to force this accountability. Germany recently passed a law that would fine Facebook up to €50-million ($75-million) for failing to remove hate speech within 24 hours. Britain has proposed treating Facebook like any other media company. The EU is implementing new data-privacy laws and is raising antitrust questions. A U.S. congressional committee is questioning Facebook, Google and Twitter officials on Russia, with lawmakers likely to impose new online election-advertising and disclosure regulations.
Oddly, these policy debates are largely absent in Canada. Instead, Facebook is intertwined with the workings of governments, the development of public policies and the campaigns of political parties. Recent policy decisions have seen the company remain largely untaxed and called on to help solve a journalism crisis of which it is a leading cause.
Thursday's announcement further illustrates the dilemma of this laissez-faire approach. How exactly should the Canadian government protect the integrity of the next federal election, in which interest groups, corporations, foreign actors and political campaigns may all run hundreds of thousands, or millions, of simultaneous microtargeted ads a day?
It could force complete transparency of all paid content of any kind shown to Canadians during the election period, as with other media. It could demand disclosure of all financial, location and targeting data connected to this paid content. It could impose significant fines for failing to quickly remove misinformation and hate speech. It could ensure that independent researchers have access to the platform's data, rather than merely relying on Facebook's good intentions. Political parties and the government could even model good behaviour themselves by ceasing to spend millions of dollars of public money on Facebook's microtargeted ads.
None of these options are likely to be adopted voluntarily or unilaterally by Facebook. We have governments to safeguard the public interest.
In fact, the modest voluntary efforts announced Thursday, which focus on users through news-literacy initiatives and on hackers through better security, ignore the key structural problem that has undermined elections around the world – the very business model of Facebook.
Efforts such as the Canadian Election Integrity Initiative represent a shift in the public position of Facebook that should, if it goes further, be welcomed. But it must also be viewed as the action of a private corporation that extracts increasing profits from a de facto public space.
We are heading into new and immensely challenging public policy terrain, but what is certain is that the easy and politically expedient relationship between Silicon Valley and government must come to an end.