Is the rise of a largely unsecured Internet, of big data and of instantaneous mass communication a threat to our privacy from governments and corporations, or an opportunity for citizens? Even as Western activists have exposed and denounced governments' use of online communications to invade our privacy, activists in the developing world have seized the same techniques and technologies and turned them against authoritarian governments as a liberating tool.
How can we get the upper hand on big data and the surveillance state, using policies, campaigns and politics, to turn information technology back into an instrument of democracy?
Doug Saunders: A decade ago, talk about online communication and the wired life was all about its democratizing potential: it would open up government, allow social movements to organize, free people from a monolithic media and provide many opportunities for formerly marginalized voices to be heard. Now we talk about threats posed by that same online life: Spy agencies use it to perform mass surveillance on many if not all civilians. Corporations buy vast tranches of our data and use them in privacy-compromising ways. And authoritarian regimes use it to censor, monitor and persecute dissent. Is the Internet now being used against us?
Jillian C. York: The Internet is absolutely being used against us, but on balance, I wouldn't say that it's done more harm than good. Rather, its benefits are in a tradeoff with its negative repercussions. For every government the Internet has helped to open -- Tunisia is an obvious example, but even the revelations about the National Security Agency’s surveillance programs wouldn't have been so simple to bring to public attention without the privacy-enhancing technologies employed by Edward Snowden -- there's another government that has used the Internet to further repression (Bahrain, Turkey, China, Iran all come to mind).
Viktor Mayer-Schönberger: The Internet is a powerful tool. As such it is neither good, nor bad; nor is it neutral (Kranzberg’s law). The Internet’s specific qualities make its use (and abuse) easier in some contexts and harder in others.
Initially, governments were simply inept at using the Internet’s specific qualities to further their interests. Now, as governments around the world become better versed in using the Internet to their advantage, we see more and more of them actively wielding it as a tool to further their ends, rather than leaving it to the people as they largely had done before.
That’s normal, even if we may find it at times deplorable. The real question is: will others, including civil-society organizations, remain a step ahead because of the very nature of the Internet, or not?
Zeynep Tufekci: Just like all major technologies before it, the Internet reconfigures power relationships in complex ways. As Viktor aptly observes, governments and other powerful actors, some of which were initially taken by surprise by the power of these new tools, have become increasingly adept at using them for their own purposes. That has been the trajectory of previous technological revolutions as well: the powerful have more resources and institutional infrastructure with which to figure out and influence new developments. However, there is a twist to this story: the Internet, with some obvious inflections by geography, has been adopted by billions of people very rapidly; its rate of diffusion is unprecedented. That makes it an uphill battle to alter its current features, which contain elements that can be considered positive or negative for ordinary people. One should note, though, that even features often considered positive, such as the ability to freely find like-minded people, can have surprising and negative impacts on the public sphere: it’s not just dissidents in repressive regimes who can seek each other out (while, of course, also being surveilled); it’s also, for example, white supremacists in Europe. The new complexities of the public sphere defy easy classification as good or bad.
Outside North America and Western Europe, the narrative around big data and Internet privacy is very different. In China, Iran and elsewhere, the Internet is used to restrict citizen participation and communication in dramatic ways, by policing and cutting off social and other media. On the other hand, in dozens of countries these same technologies have become the principal means of resistance to government abuses -- witness the protesters' revelations of the Erdogan family's excesses in Turkey this week.
In her essay "What tear gas taught me about Twitter and the NSA," Dr. Tufekci argues that the narrative is dramatically different in non-Western countries, and that “Orwellian” metaphors are misleading and exaggerated, obscuring the more important developments taking place outside the West. Is there another story to be told about citizens and big data?
I think your portrayal is simplistic and fails to capture the complexity of the situation. For starters, few of the regimes that you mention actually utilize big data when they clamp down on freedom of expression. Rather, they use plain old small-data surveillance and intimidation. So invoking big data in this context is unhelpful. This does not mean that big data does not have severe dark sides (it does!), only that these are quite different from what you suggest.
At the same time, there are cases of individuals and small groups of open-data enthusiasts combining the access provided by open-data initiatives with the power afforded by big data to create valuable apps that offer novel insights: which flight on a given day and time will likely be delayed, and by how much; where, geographically speaking, doctors prescribe expensive brand-name statins when generics can do the job; or what the actual commuter patterns are that urban planners failed to predict from the small samples of data they had.
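One of the scenarios above -- predicting flight delays from past records -- can be sketched in a few lines. This is a toy illustration with invented data and field names, not any real airline dataset or API:

```python
# Hypothetical historical records: (flight_number, weekday, hour, delay_minutes).
# In a real big-data setting these would number in the millions.
records = [
    ("UA100", "Mon", 8, 12),
    ("UA100", "Mon", 8, 45),
    ("UA100", "Mon", 8, 30),
    ("DL200", "Fri", 17, 5),
    ("DL200", "Fri", 17, 10),
]

def expected_delay(flight, weekday, hour):
    """Average past delay (in minutes) for a flight at a given day and hour."""
    delays = [d for (f, w, h, d) in records
              if (f, w, h) == (flight, weekday, hour)]
    return sum(delays) / len(delays) if delays else None

print(expected_delay("UA100", "Mon", 8))  # 29.0
```

The insight is nothing more than averaging, but applied at scale to millions of records it yields predictions no individual traveller could make alone.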
I am not suggesting that these and many other actual and potential uses of big data (including identifying flu trends, adverse drug reactions, and the like) outweigh big data’s potential dark sides, only that uses of big data that benefit society do exist (unsurprisingly).
Doug Saunders: When we talk about information security and surveillance in terms of threats rather than opportunities, are we in danger of creating a paralyzing mood of despair that disempowers people? As Jillian C. York wrote recently, protecting yourself from online surveillance is nowhere near as difficult as people think. Viktor Mayer-Schönberger and his co-author observe that the old defence of default anonymity is no longer possible -- and he has noted that the “right to be forgotten” is worth fighting for -- but suggest that fairly simple regulations could shift big data’s balance from peril to promise. Is it time to change the conversation around privacy and information technology? Should we start talking about opportunities for citizens to seize control rather than threats from which they should hide in fear?
Jillian C. York: I do believe that it's time we start talking about opportunities for citizens to seize control. The way I see it, we have three options for fighting back against surveillance: We can do it through policy and legal processes, we can play the long game of changing our culture, or -- most immediately -- we can take personal responsibility to understand the threats we face and learn how to protect ourselves against them using technology.
Still, the individual response must also be collective. Mass surveillance doesn't merely seek to collect individual data; it seeks to collect data, and metadata, on entire networks. The metadata collected about an individual can tell you quite a bit -- such as whom he or she has contacted and when -- but when you begin to connect the dots between individuals, that's where the chill sets in: it's not only about privacy, but also freedom of association and freedom of expression.
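The point above about connecting the dots can be made concrete: even with no message content at all, a log of who contacted whom is enough to reconstruct a social graph. A minimal sketch, using made-up call records:

```python
from collections import defaultdict

# Hypothetical call metadata: (caller, callee, timestamp).
# No content at all -- only "envelope" information.
calls = [
    ("alice", "bob", "2014-06-01T09:00"),
    ("bob", "carol", "2014-06-01T09:05"),
    ("alice", "carol", "2014-06-02T18:30"),
    ("dave", "alice", "2014-06-03T07:15"),
]

# Build an undirected contact graph from the metadata alone.
graph = defaultdict(set)
for caller, callee, _ in calls:
    graph[caller].add(callee)
    graph[callee].add(caller)

# Each person's full set of associations falls out of the graph.
print(sorted(graph["alice"]))  # ['bob', 'carol', 'dave']
```

From a handful of such records the entire web of association emerges, which is why metadata collection implicates freedom of association and not only privacy.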
Viktor Mayer-Schönberger: It is unfair to expect individuals to protect themselves when they do not have the tools available to do so (and not enough transparency to inform their choices). It is also disingenuous to let individuals fend for themselves in an ever more complex environment of data (re)uses.
We don’t expect individuals to bring their own chemistry kits to do lab analysis on food safety either. Rather, we rely on (hopefully) effective regulation and enforcement through experts at government food-safety agencies. Just as food safety is too complex for individuals to assess themselves, so is deciding at the time of collection whether, and for what purposes, to permit others to use one's personal data. We need to hold data processors accountable for how they use personal data, and we need improved, effective regulatory enforcement. Only then can we hope to re-establish some of the trust people once had in sharing data, and in learning from it.
Zeynep Tufekci: I agree that people should take reasonable precautions, but there is no truly individual solution to the structural challenges that surveillance poses to society as a whole. The “threat model” approach is fine at the individual level, but it cannot guard us against the aggregate impact of small, reasonable choices made by people, most of whom will, correctly or incorrectly, perceive little to no threat against them. The result is that central powers end up with a lot of information, collected a small piece at a time.
At the individual level, people reasonably trade a little bit of privacy for a little bit of convenience, or decide to ignore the costs altogether (even though constantly thinking about threats is a cost as well). Many may never face an individual negative repercussion. However, the result of these small individual choices is an aggregate situation in which the centralized powers -- governments and Internet corporations -- have a lot of information, gathered little by little, on the whole society.
This is a threat only at the aggregate level. In other words, the problem isn't that a centralized power has a little bit of information on Alice or Bob or Eve, but that it has a little bit of information on Alice and Bob and Eve and everyone else. This creates a tragedy-of-the-commons situation in which individually reasonable choices create an overall worse environment for everyone involved. Such aggregate data allows the powerful to engage in computational practices that can be used to discriminate, manipulate or exploit.
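A toy example of how individually trivial data becomes revealing in aggregate (the location pings and tower names below are invented):

```python
from collections import Counter

# Hypothetical location pings for one person: (hour_of_day, cell_tower_id).
# Any single ping reveals almost nothing.
pings = [
    (2, "tower_A"), (3, "tower_A"), (23, "tower_A"), (1, "tower_A"),
    (10, "tower_B"), (11, "tower_B"), (14, "tower_B"), (15, "tower_B"),
]

# Aggregated, the pings expose routine: the most common night-time tower
# is likely "home", the most common daytime tower likely "work".
night = Counter(tower for hour, tower in pings if hour < 6 or hour >= 22)
day = Counter(tower for hour, tower in pings if 9 <= hour < 18)
print(night.most_common(1)[0][0], day.most_common(1)[0][0])  # tower_A tower_B
```

No single record in the stream is sensitive, yet the aggregate reveals where a person sleeps and works.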
That's why this cannot be left on individuals' shoulders alone: we need society-wide, policy-level discussion and intervention.