Government privacy watchdogs are teaming up to probe whether Canadian laws are being broken by Clearview AI’s powerful facial-recognition software.
The rare joint investigation by the federal Office of the Privacy Commissioner and its three counterparts in B.C., Quebec and Alberta was announced Friday, after police organizations acknowledged using the U.S. company’s software.
“Media reports have stated that Clearview AI is using its technology to collect images and make facial recognition available to law enforcement for the purposes of identifying individuals,” the watchdogs said in a statement. “The company has also claimed to be providing its services to financial institutions. The four privacy regulators will examine whether the organization’s practices are in compliance with Canadian privacy legislation.”
The watchdogs will specifically explore whether banks have been using Clearview AI’s technology, said Vito Pilieci, a spokesman for the federal Privacy Commissioner.
Clearview AI, which sells a technology more powerful than that offered by its competitors, markets its app by telling police detectives that it can determine almost anyone’s identity.
The software analyzes photos of unidentified faces, then matches them against an enormous trove of billions of photos the company found posted to popular social-media sites.
Corporate giants such as Facebook, Google and YouTube have all recently sent cease-and-desist letters to Clearview AI, saying that this motherlode of facial images was amassed without the consent of the social-media companies or their users.
Parliamentarians in Ottawa say they may soon join the effort to rein in this technology.
“Clearview is operating within a judicial and a legislative vacuum,” said NDP MP Charlie Angus. In an interview, he said that he will soon be asking a legislative committee focused on data-privacy issues to “examine facial-recognition technology, its use by police and also possible misuse.”
In January, The New York Times published an article stoking privacy concerns about Clearview AI by exposing how police across North America were using the app. The company replied in a statement saying that its product has helped “solve thousands of serious crimes, including murder, sexual assault, domestic violence, and child sexual-exploitation cases.”
Privacy experts say Clearview AI’s data-harvesting techniques could well be considered illegal if they are found to have involved computers indiscriminately trawling the internet to amass images of Canadian citizens.
“I believe [Clearview] are in violation of Canadian privacy laws,” said Brenda McPhail of the Canadian Civil Liberties Association.
Clearview AI did not respond to a request for comment.
Companies that amass personal data “have got to comply with our privacy laws and that includes capturing data with consent,” said Colin Bennett, a University of Victoria political-science professor.
Some of Canada’s biggest police and security organizations – including the RCMP, the Ontario Provincial Police and the Canadian Security Intelligence Service – have declined to say whether they have used the technology.
Several municipal police forces in Ontario, however, have recently acknowledged testing the product.
This has led Ontario Privacy Commissioner Brian Beamish to urge police forces in the province to “stop this practice immediately and contact my office.”
Despite this position, he is sitting out the joint investigation announced Friday, pointing out that Ontario’s relatively weak laws don’t allow him to put questions to corporations.
“If Ontario had private-sector privacy legislation, I’d be happy to join in this investigation,” Mr. Beamish said in an e-mail. But as things stand, “we have no jurisdictional premise for joining the investigation into Clearview.”