
Bill C-27 is ostensibly meant to protect Canadians’ privacy, but its provisions for AI threaten our potential future as leaders in artificial intelligence

The Instagram page of OpenAI, creator of the DALL-E 2 image-generation system, is filled with artworks created by artificial intelligence from simple text descriptions: ‘A Shiba Inu dog wearing a beret and black turtleneck,’ or ‘Oil painting of a hamster drinking tea outside.’ Instagram (@openaidalle)

Stephen Marche is a writer based in Toronto. His most recent book is The Next Civil War: Dispatches from the American Future.

When Aidan Gomez first entered Google’s offices in Mountain View, Calif., he heard a familiar sound of home: Québécois French. It was an appropriate sound for the project he was about to undertake because Canadians dominated the team that built the first “transformer,” the technology behind the new wave of generative artificial intelligence. That transformer is the “T” in the AI chat app ChatGPT, which is how most people will have heard of it, if they’ve heard of it at all.

The transformer underpins the GPT language models, as well as DALL-E 2, the frighteningly good AI image-generation system, and all the other generative technologies currently amazing everyone who uses them. The Canadian point of origin for the transformer should be a source of national pride. And eventually, no doubt, this little scene will be the subject of one of those Canadian Heritage Minutes, even though at the moment nobody outside of specialist circles knows what a transformer is or who built it.


Aidan Gomez, middle – shown in 2021 with Cohere co-founders Ivan Zhang, left, and Nick Frosst – is the Canadian co-inventor of the transformer that powers DALL-E 2. Fred Lum/The Globe and Mail

But there were two types of those Heritage Minutes, as you’ll remember. There were the ones in which Canadians achieved glorious world-changing feats of innovation, such as James Naismith with basketball. And then there were the other, sadder ones in which Canadians started doing something wonderful and then blew it, such as designing and cancelling the Avro Arrow fighter jet in the 1950s – gutting the potential of a domestic industry and putting an entire company out of business.

Canada’s Artificial Intelligence and Data Act – a provision in the proposed privacy legislation, Bill C-27 – threatens the country’s position at the avant-garde of AI. It will, to a significant extent, determine which of those two futures is more likely.

AI has the misfortune to be the tech revolution taking place in the aftermath of several other ones that have shamefully failed. After WeWork, Theranos and most recently FTX, the credibility of tech leaders is at an all-time low. The mass psychological devastation wrought by social media means that “move fast and break things” won’t fly as a motto anymore. Over the past decade, Big Tech has shown itself to be radically indifferent to the consequences of its products, and proved beyond any doubt that the notion of self-regulation is a joke.

The architects of AIDA, despite the bill’s flaws, deserve our sympathy. There need to be regulations for AI, and what AI even is, never mind its ethical consequences, is extremely complex and hotly disputed, even among the people building it.

But the problem with AIDA is the combination of extremely vague terms with severe punishments.

Like European legislation on AI, AIDA will focus on “high-impact” AI systems. It makes sense: The regulators want to concentrate their legal fire on the forms of AI that have the most potential for causing mass harm. The problem is that the AIDA provisions of the bill don’t specify which systems, exactly, are “high impact.” That crucial matter will be settled by regulations to be drafted later.

The potential punishments involved in working on any AI system that could be considered “high impact” are drastic: For starters, penalties of up to 3 per cent of global revenues for basic contravention of AIDA. Then there are even more severe penalties, including imprisonment, for, among other violations, “knowing (or being reckless as to whether) the system is likely to cause serious physical or psychological harm.” Again, this language is extremely vague on a key point, because in AI, the mechanisms of the system are exactly what can’t be established.


Eric Schmidt, then executive chairman of Google's parent company Alphabet, laughs with Prime Minister Justin Trudeau at a Toronto conference in 2017. Frank Gunn/The Canadian Press

Eric Schmidt, the former Google chief executive officer, confronted this same conundrum when the European Union proposed its regulations in 2021. The Europeans planned to demand transparency from AI companies – that AI systems be able to explain themselves, so that regulators could examine and then police their processes.

“But machine-learning systems cannot fully explain how they make their decisions,” Mr. Schmidt said at the time – and it’s still very much true. Machine learning works through unfathomable quantities of data, adjusting itself in ways that are literally beyond comprehension. The inexplicability is exactly the source of its power; it thinks through what we can’t. That inexplicability makes regulating processes so thorny that any approach from that direction is more or less pointless.

There is no question that this stuff is incredibly powerful and needs regulation, and a clear regulatory framework would be a national advantage. But any meaningful regulation that won’t just strangle the industry will have to focus on outcomes rather than processes.

Let’s also be clear about how much power the Canadian regulators have to affect the future of AI. AIDA won’t stop its development by half a beat. It might alter the geography, shifting the process of innovation outside the country, but it won’t stall the innovation itself.

The danger is that the nascent AI industries in Canada, which are already being pulled away from Toronto and Montreal to Silicon Valley and London by the forces of money and power, will stop holding on and let themselves be pulled. Nobody in San Francisco or London will propose massive fines for unspecified activities, determined by whether or not they’re deemed “high impact.” They’ll just want the tech.

To lose the potential of AI would amount to a national catastrophe, the wasted opportunity of a century. The world is in the middle of a tech recession, except for transformer-based AI. In October, Jasper and Stability AI received billion-dollar valuations. They are known mainly for text and image generation, which are frankly the toy use cases of this technology. Eventually, transformer-based AI will develop vastly more important capacities for linguistic applications and analysis. Mr. Gomez’s own Toronto-based Cohere is at the forefront of this revolution.


The Avro Arrow, Canada's first faster-than-sound jet interceptor, makes its public debut in Malton, Ont., in 1957. Harold Robinson/The Globe and Mail

There are very real risks to throwing a virtue blob at the problem of AI systems. Nothing enraged my father, who was a military pilot in his youth, quite like the story of the Avro Arrow and its shafted maker, so emblematic of Canada’s failure to see the possibilities of its own talent in the name of short-term political virtues (in the Arrow’s case, prime minister John Diefenbaker’s belief in the virtue of restraining government spending). The engineers who built the Arrow went on to NASA; Canadians planted glorious seeds and the Americans harvested the fruit.

The current situation with AI is potentially much worse than the situation with the Arrow. Technological mercantilism is a fact of life now in a way that it wasn’t in 1958. Today, a country’s wealth is in the technology it controls. And the potential wealth of emerging AI is immense. There is no reason why Toronto could not be for AI what New York is for banking or what Los Angeles is for entertainment. The question is whether Canada will mess it up.

At the heart of AIDA is a more fundamental question: a national tendency to self-effacement and a congenital resistance to risk-taking. The story of Canada’s contribution to AI will be a source of national pride only if we manage to find sufficient national pride to keep it. Why did we educate the engineers of our country, at great expense, only to enrich Californians? Why do we bother having NSERC grants at all, if, when they uncover a power like the transformer, we just give it away?

The federal government is trying to do to artificial intelligence what it should have done to social media 10 years ago. But these technologies are not at all alike. They are different cases with different risks, and they require different regulatory frameworks.

At this moment, we hold the future in our hands. If we squander it, there’s not another future coming down the line.
