
Wendy H. Wong is a professor of political science, the Canada Research Chair in Global Governance and Civil Society, and the research lead of the Schwartz Reisman Institute for Technology and Society at the University of Toronto.

The Privacy Commissioner of Canada recently found that the RCMP violated the Privacy Act when it used services from Clearview AI, a facial-recognition technology company. In an earlier investigation, the commissioner, Daniel Therrien, argued that Clearview AI, which scrapes the internet for faces, itself violates privacy laws. But what does that mean?

In a world of rapidly advancing digital technologies, it often seems as though everything is changing at speed, with little regard for the human consequences of datafication. Fortunately, a robust framework of international human rights exists to give us guidance.

A right is an entitlement, not a courtesy. International human rights articulate the minimum that must be ensured to protect our autonomy and dignity. Research has demonstrated the potential of facial-recognition technology (FRT) to harm and discriminate against people, including through the wrongful arrests of Black men. FRT therefore affects autonomy and dignity.

What’s at stake with emerging technologies is deeply personal. FRT takes something very important away from us: control over our own faces. More precisely, when our faces become data, a profound part of who we are floats freely. If we don’t come to terms with the “stickiness” of data – easily reproduced and shared – our policies will miss how data fundamentally shifts our humanity, including our faces. Proclaiming something a “human-rights violation” is only the first step. We need to acknowledge how data alters our autonomy and dignity before we can find a solution. We need to go back to the basics of what it means to protect human rights, even if that means changing some rights and creating new ones.

FRT isn’t exactly new. The earliest forays into the technology came out of Palo Alto in the 1960s. Today, Big Tech and startups alike are developing FRT.

Critics charge that FRT doesn’t allow for consent and that it violates privacy, creating chilling effects. It enables surveillance societies, damaging democratic systems. While true, these arguments don’t go far enough: they don’t fully capture how FRT can change the way we live.

FRT does more than violate our right to consent to activities that erode our privacy. It takes our faces and turns them into data, which can then be copied, transferred and analyzed indefinitely. We may never know where it goes, who has it or what purposes it serves. Facial data feeds all kinds of predictive technologies, from the anti-stalker systems used at Taylor Swift concerts to the in-car systems that help you drive.

But can we ever properly consent to having our faces made into data? In the best of times, consent is a challenge to define. In the age of datafication, it has become almost impossible to take someone’s “consent” as meaningful. It’s hard for people to understand to what, exactly, they are consenting. There is not always a way to opt out of FRT or other digital services.

There are several lawsuits accusing Clearview AI, whose technology has been widely used by law-enforcement agencies, of violating privacy laws. Companies have also complained that Clearview AI violates their websites’ terms and conditions when it scrapes photos. Clearview AI’s position is that people have made their images public by posting them online, and that the company’s actions are protected as freedom of speech.

Viewed through a consent lens, neither side prevailing really settles the human-rights issue. If Clearview AI wins its lawsuits, then anything we share online is free for others to use however they wish; by posting it, we have shed our claims to it, even if we posted it for specific purposes. If Clearview AI loses, our faces can be used only for “authorized” purposes – many of which are difficult to anticipate and could remain unknown to us – and the question of what exactly happens to the data from our faces remains unresolved.

Our faces are, in many ways, who we are to the world. They are how people identify and remember us. When we digitize our faces, we become data – out there forever. In its recent proposal for regulating the use of AI, the European Commission flagged FRT as a “high risk” activity demanding the most regulatory scrutiny. Clearview AI’s scraping of the internet also shows how readily data can be linked – uploaded for one purpose but used for another.

So who owns the data from our faces? We don’t yet have an answer. To start, reprioritizing values such as human autonomy, dignity and freedom – the very reasons human rights exist – in a world of FRT will provide answers to our specific questions about “privacy” and “consent.” Data brings us a profoundly new way of living our lives. But it is also a record of ourselves, of what makes us who we are: our faces, but also our locations, thoughts and very heartbeats. It is a mistake not to consider how data and AI profoundly change human experience. Human rights give us a globally legitimate way to set ethical guardrails for policy and social norms in this brave new world.
