Storm clouds pass by the Peace Tower and Parliament Hill, Tuesday, August 18, 2020, in Ottawa. Adrian Wyld/The Canadian Press
Matt Malone is an assistant professor in the faculty of law at Thompson Rivers University in Kamloops.
Although many Canadians harbour concerns about the rise of AI, few believe the government is up to the task of regulating it. Internal documents I’ve obtained via an access-to-information request reveal a stark reality: According to surveys recently conducted for the Privy Council Office, just 24 per cent of the population trusts the federal government to implement effective policies to regulate AI.
This should hardly come as a surprise.
Accountability and transparency are at the forefront of the ever-increasing calls for the regulation of AI. But the federal government’s proposed AI law does not even apply to the government itself. So much for accountability when things go wrong with the government’s own experimentation with AI in areas such as law enforcement, immigration and border control.
As for watchdogs like the Privacy Commissioner, they are no longer fit for purpose. The commissioner’s recent announcement of an investigation into ChatGPT attracted plenty of attention. But the commissioner’s track record of investigations makes clear that the ChatGPT probe will not conclude until next year at the earliest – and perhaps not until 2025 – likely after OpenAI releases the next iteration of ChatGPT.
Last July, the commissioner announced an investigation into the AI-powered ArriveCAN app, which wrongly told more than 10,000 people to quarantine. Where is the report? And the commissioner is still losing its court challenge against Facebook over the company’s conduct in the Cambridge Analytica scandal, which came to light more than five years ago.
Other watchdogs are just as enfeebled. AI raises concerns of monopolization and deceptive advertising – responsibilities that fall squarely within the remit of the Competition Bureau. But the bureau’s Monopolistic Practices Directorate, the section responsible for reviewing proposed mergers and investigating practices that might harm competition, has not issued a single administrative monetary penalty since 2015. In the meantime, Big Tech companies developing AI, such as Microsoft and Google, have acquired hundreds of their competitors.
The Competition Bureau itself initiates investigations into fewer than 1 per cent of the complaints it receives each year. Its Strategic Vision for 2020-24 was to become “a world-leading competition agency, one that is at the forefront of the digital economy.” That is not happening. Last year, the bureau recouped just $3,846,967 in fines, penalties, settlements and investigative costs; meanwhile, its budget was $59.5-million. Year after year, the Competition Bureau costs Canadians more than it saves them.
The need for a more vigilant approach is urgent. As Lina Khan, the chairperson of the Federal Trade Commission in the United States, has noted, monopolistic control is one of the most important threats from AI. The unequalled resources of Big Tech actors such as Microsoft – whose market capitalization already exceeds the GDP of Canada – provide considerable advantages in developing these technologies and strangling competitors.
But monopoly is not the only problem. Precisely as AI technologies attract more of our time, attention and trust, the Competition Bureau is letting companies release AI products with deceptive advertising. There is nothing “open” about OpenAI. What began as a non-profit aiming “to build value for everyone rather than shareholders” is now a for-profit vehicle whose main product has been caught praising Hitler, teaching users how to make methamphetamine or bombs, and generating malware and phishing attacks. And in Canada, Tesla still advertises Autopilot as having “full self-driving capabilities” – a claim that is patently false.
Clamping down on companies making unchecked claims about AI should be a key form of government oversight and control. Earlier technology products eroded our privacy rights in part by waging a clever battle at the discursive level (“engagement” instead of addiction, “cookies” instead of spyware). We should scrutinize accuracy and truth in the advertising of these technologies. Where products present risks to health and safety, we should ban them or label their risks, as we do elsewhere.
But, of course, none of this is happening. With a government happy to exclude itself from accountability and watchdogs that never bite, it is little wonder so few Canadians trust the government to implement – let alone enforce – effective policies to regulate AI.