Canada’s largest municipal police service has acknowledged “informally testing” a powerful surveillance tool, but said it has stopped its use pending reviews.
The Toronto Police Service last month initially denied using the Clearview AI software. The technology has sparked concerns from privacy advocates who say the use of emerging surveillance tools by law enforcement agencies is not being adequately scrutinized.
In Canada, such technologies can exist in a legal grey area, allowing detectives to advance their investigations while not necessarily disclosing techniques to the accused, judges or the wider public.
“Some members of the Toronto Police Service began using Clearview AI in October, 2019, with the intent of informally testing this new and evolving technology,” the police service said in a statement e-mailed to reporters on Thursday. But Chief Mark Saunders last week “directed that its use be halted immediately upon his awareness,” the statement added.
The police service did not say where, why or by which unit the tool was used, nor did it explain why the chief would not have been told about its use at the outset. The statement also did not clarify whether the tool was used in any criminal investigations, or whether judges had authorized its use.
The TPS statement says that the police force has reached out to Ontario’s Information and Privacy Commissioner and the Ministry of the Attorney-General to review the technology “and its appropriateness as an investigative tool for our purposes.”
The statement says that until reviews are completed, Clearview AI “will not be used by the Toronto Police Service.”
Clearview AI was thrust into the spotlight in January by a New York Times article revealing that a technology company by that same name was selling its facial-recognition product to police across North America.
Forms of facial-recognition software are now routinely used by police services in the U.S. and Canada, although often in relatively limited capacities – for example, to compare suspect images against police-held mugshot databases.
Clearview’s app was built as a police tool by using billions of publicly accessible photos sourced from popular social-media websites. Because these images were taken from the websites without explicit user consent, social-media giants such as Facebook, Google and Twitter reacted to the revelations by sending cease-and-desist letters to Clearview AI.
In recent weeks, several Canadian police forces, including the Toronto Police Service, have released statements saying that they were not using the tool.
The Toronto Police Service has previously had to correct other misstatements about the specific surveillance technologies it uses.
In 2015, for example, a TPS spokesman told the Toronto Star that the force’s officers were not using a technology known as IMSI catchers. Such surveillance tools are used by police to indiscriminately grab data from all cellphones in an area, in hopes of tipping off detectives to the phones carried by crime suspects.
But court documents later surfaced showing that police had in fact been using the technology as early as 2014.
Privacy expert Ann Cavoukian said she was very disappointed to learn police in Toronto had been using Clearview AI’s technology.
“Your facial image is your most sensitive biometric,” said Ms. Cavoukian, formerly the Ontario government’s privacy commissioner. “This kind of unacceptable technology certainly should have gone up the ranks and gotten approval from the chief, not just used incidentally by one arm of the Toronto Police Service,” she said.
“We have jurisdictions in the United States that are outright banning facial recognition because of all its inaccuracies,” she added. “It’s a nightmare if you are incorrectly accused.”
Ms. Cavoukian, however, applauded Chief Saunders for putting the tool’s use on hold. “He took the right action,” she said. “We need a proper investigation.”