The King’s Cross neighbourhood in London used to be quite sketchy, full of smoke-belching pubs and Dickensian characters loitering in doorways. Now it’s posh and gleaming, fit for the 21st century, home to Google’s new headquarters and restaurants you can take your mother to. Oh, and facial-recognition cameras as well.
That’s right, the developer of the King’s Cross neighbourhood is using facial-recognition software in its surveillance cameras, The Financial Times reports. Why? To what end? Who knows? A spokesperson for Argent, the developer, told the BBC that the goal is to ensure “public safety,” which suggests that “safety” means something in corporate speak quite different from its normal English usage. The local council government knows nothing about the software. You, walking down the street stuffing a sausage roll into your unshaven gob, do not know anything about it. You certainly have not consented, by your mere presence, to have your face matched to some dubious database.
Europe has strict privacy laws, and because Britain is still part of the European Union, at least for the next couple of months, this kind of intrusion is causing all kinds of waves. London’s mayor is unhappy. Britain’s privacy watchdog is investigating: “I remain deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector,” Elizabeth Denham, the Information Commissioner, wrote while announcing an investigation into the King’s Cross issue.
I’d like to think that many others are also deeply concerned about the unregulated spread of facial-recognition technology, but I’m not actually sure that’s true. You should be, certainly. The city councils of Oakland and San Francisco were worried enough to take action, banning the use of the technology in municipal services. But, as in so many things, we appear to be divided between the haves and have-nots. You might be quite bullish about this new toy if your kids are at summer camp and it allows you to see them making bead necklaces, or if it speeds up your passage toward the airport duty-free shop by two minutes. You might be a bit more leery about facial-recognition technology, quite understandably, if you’re a protester in Hong Kong who’s being targeted by police, or a person of colour in North America, whose community is already overpoliced and is now subject to a new tool that is very bad at recognizing your individuality – and that could get you targeted in the absence of any other evidence.
It’s one thing when you use an image of your face to unlock your phone; that’s your decision. It is quite another when the technology is being deployed by state actors or private corporations, without oversight, consent or debate, for the purposes of security or profit. As Kate Crawford of the AI Now Institute at New York University wrote, “These tools are dangerous when they fail and harmful when they work. We need legal guard rails for all biometric surveillance systems, particularly as they improve in accuracy and invasiveness.”
Those protections are not in place, not even close, and we have not begun to have meaningful debates about what the trade-off should be between privacy, security and convenience. The use of facial recognition in policing, for example, is particularly fraught. In Montreal, where police have declined to say whether they use the kind of software employed by forces in Toronto and Calgary, city councillor Marvin Rotrand is trying to have the technology banned.
It came to light in May that police in Toronto have been using facial-recognition software to match mug shots with video footage or still photos to identify possible suspects, a pilot program that was not exactly widely publicized. The Canadian Civil Liberties Association wrote in opposition to this “stealth” initiative that “facial recognition technology is a mass, indiscriminate, disproportionate, unnecessary, warrantless search of innocent people without reasonable and probable cause.”
At least the Canadian police are not using “live” facial recognition employed on their body cameras, a highly controversial practice that’s being legally challenged in both California and Britain. The technology’s not just controversial and morally dubious; it also appears to be dangerously ineffectual. This month, the American Civil Liberties Union used Amazon’s Rekognition software (heavily marketed to U.S. police forces) to match a database of 25,000 mug shots against photos of California state legislators. Almost 20 per cent of the lawmakers were incorrectly identified as criminals – and almost half of those mistakes involved people who were minorities.
I’m sure that percentage would not come as a shock to the Algorithmic Justice League, whose founder, Joy Buolamwini, is a computer scientist who studies bias in software design. The group’s Safe Face Pledge asks the creators of facial-analysis tools to commit to certain guidelines: transparency, eradicating racial and gender bias in design, and, crucially, not creating any technology that will endanger human life. Alongside that, it seems to me, we should have an ordinary-human pledge, along the lines of: “I promise not to sleepwalk into a future I have only seen on Black Mirror.”
At this point, sleepwalking is the order of the day. Who is overseeing the “mass, indiscriminate” roll-out of this technology, in all aspects of our lives? The answer, so far, seems to be “nobody,” apart from worried researchers, privacy specialists and the occasional heroic citizen, such as the one who noticed facial-recognition software being employed at a directory in a Calgary mall and complained to the province’s privacy commissioner about it. We can add to that list certain employees of Amazon, who have protested the security use of the Rekognition software. The rest of us have turned a blind eye, perhaps, in our rush to beat the crowd to duty free.