Erica Ifill is an economist, columnist and founder of the equity and inclusion consultancy Not In My Colour.

Tech is not neutral – it’s Minority Report. Yet the Liberals’ new legislation, Bill C-27, which ostensibly deals with data privacy, contains a regulatory regime that threatens to allow systemic discrimination against Black and Indigenous people, people of colour, and non-binary and transgender people in Canada – the people who are typically at the front of the line for discrimination.

While AI could bring us closer to solving some of the world’s most intractable problems, from ending hunger to improving health care, it could also cause severe damage on a mass scale to vulnerable communities. According to Brianna Lifshitz, writing in Georgetown University’s Security Studies Review: “Just as AI offers advancements, there is also the potential for a bleak future – machines are prone to bias and racism. … This is dangerous and the threat needs to be addressed before biased AI systems become ubiquitous. Ultimately, AI systems have the potential to deepen existing systemic inequalities, particularly in industries like health care, employment and criminal justice.”

Bill C-27 ostensibly seeks to mitigate the risks of these harms, with fines of up to the greater of $25-million or 5 per cent of the offender’s gross global revenues upon conviction. But what it deems a harm is so specific, siloed and individualized that the legislation is effectively toothless.

According to this bill, “harm means (a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.” That’s inadequate when talking about systems of harm that go beyond the individual and affect some communities disproportionately.

“While on the surface, the bill seems to include provisions for mitigating harm,” said Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa’s Centre for Law, Technology and Society, “the language focuses on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kind of AI systems.”

Indeed, what about the loss of freedom if, for example, a law enforcement agency accuses you of being a terrorist? That’s effectively what happened to Maher Arar, a Canadian citizen who was deported to Syria, where he was tortured for a year, after the RCMP provided faulty information to U.S. officials – and they didn’t even have today’s surveillance technology.

The RCMP and the Toronto Police Service are among the agencies that had, at least at one point, accessed Clearview AI’s facial-recognition technology, which allows law enforcement and other companies to match photos against its database of more than three billion images compiled without the consent of Canadians, including children. “What Clearview does is mass surveillance and it is illegal,” said former federal privacy commissioner Daniel Therrien. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society who find themselves continually in a police lineup.” (Clearview says it deleted the photos of Canadians after three provincial orders, and that its technology has not been used in Canada since 2020 – but it could still return.)

That’s another problem with this bill: what Clearview AI did would potentially not violate this legislation, so long as the activity is “a product, service or activity that is under the direction or control of (a) the Minister of National Defence; (b) the Director of the Canadian Security Intelligence Service; (c) the Chief of the Communications Security Establishment; or (d) any other person who is responsible for a federal or provincial department or agency and who is prescribed by regulation.”

There would also be no independent oversight. The artificial intelligence and data commissioner, who is to handle enforcement, would be an official from the Department of Innovation, Science and Economic Development (ISED) chosen by the ISED minister – meaning that only a politically appointed public servant would stand between the minister’s interpretation of harm and actually addressing it. Given that ISED presented this bill as is, it raises the question: what were the results of the department’s consultations with experts, and of its own gender-based analysis plus (GBA+) on this policy? We should ask, since ISED was named in the class-action lawsuit alleging systemic discrimination that Black public servants launched against the federal government.

The ISED minister would be the lone authority for identifying, assessing and determining whether an individual has been harmed – not experts in the field or a panel of adjudicators. This centralizes an extraordinary amount of power in one department to address the potential systemic discrimination these systems will unleash. What’s worse, few Canadians seem worked up about it – potential future Maher Arars be damned.
