Opinion

Sam Altman, CEO of OpenAI, arrives at an AI Forum hosted by Senate Majority Leader Chuck Schumer at the U.S. Capitol, in Washington on Sept. 13, 2023. HAIYUN JIANG/The New York Times News Service

Nicola Lacetera is a professor of strategic management at the University of Toronto Mississauga and the Rotman School of Management.

With the development and rapid improvement of artificial-intelligence capabilities over the past 15 years, a worrying, long-standing approach in the tech industry has gained further traction.

“Ask for forgiveness, not for permission”: The phrase, attributed to the late U.S. Navy rear admiral Grace Hopper, has long been a mantra in Silicon Valley. Innovators, the thinking goes, cannot waste time waiting for regulatory requirements to be clarified, let alone for ethical deliberation about their products. New technologies evolve quickly and are assumed to be inherently beneficial, so any bureaucratic delay is a cost to humankind. Breaking things allows moving faster.

This narrative, in fact, predates the digital economy. In 1970, the economist Milton Friedman, later a Nobel laureate, claimed that the social responsibility of business is to maximize profits; it is then up to the government and the goodwill of shareholders to correct distortions such as social or environmental damage. The percolation of these ideas through society culminated with the advent of the internet and the emergence of large tech companies.

By reducing the costs of accessing, storing and sharing information, the World Wide Web appeared to be a powerful, positive force that would enhance everybody’s opportunities. Holding tech companies accountable for the content posted on their platforms, or limiting the size and influence of single corporations, for example, would tame this gale of creative destruction, thus obstructing tech entrepreneurs’ quest to “make the world a better place.”


Concerns about the abuse of market power became less and less relevant, sharing personal data in exchange for “free” services became a matter of personal responsibility, and, in the name of free speech, any restriction on what could be posted online became taboo.

Which brings us to AI. To many business leaders, intellectuals and academics, AI is just a tool, albeit one with unprecedented predictive and, now, generative capabilities. It is also a “general purpose” technology, because it applies widely across many industries, just like electricity. And who would want a world with limits on the use of electricity?

Except that AI is no electricity. Electricity does not learn or predict preferences and behaviour, and it does not generate text, programming code, songs and images. Online platforms whose revenues rely on advertising (from Facebook to YouTube), and that therefore benefit from more user engagement, have been using AI to exploit people’s tendency to pay more attention to news that confirms their own ideological beliefs, and to be more active when they feel aroused and angry.

They have done this by accurately predicting users’ preferences (and weaknesses) and creating information bubbles. Demagogues and autocrats have leveraged these strategies to push misinformation, to polarize and ultimately pollute public discourse, and to affect the outcomes of the most important events of the past few years, from elections to the assault on democratic institutions.

The emerging and quickly improving generative capabilities of AI enable the diffusion of information and images that, while fabricated, resemble reality in form and content. Imagine what those demagogues and autocrats can do with these tools, coupled with platforms that enable the massive diffusion of material whose veracity most people cannot assess.

Imagine these tools in the hands of child-pornography rings, now that turning the photo of a high-school girl into a hyperrealistic nude version to circulate on the web is within easy reach (98 per cent of all deepfake videos have pornographic content; in 99 per cent of those, the victims are women, often underage).

Eventually, in a rather optimistic scenario, false information is debunked and fake material is taken down. But by the time “eventually” comes, the fate of democracies has been determined, and people have suffered long-lasting trauma.

Faced with these challenges, and with the possibility of long-lasting and hard-to-reverse effects, the European Union’s recently approved AI Act turns the “forgiveness, not permission” proverb on its head. The details do matter, of course, but the overall message is clear: the view of AI as a neutral, general-purpose technology to be promoted without predefined constraints is obsolete and inappropriate.

Under the act, AI systems that may manipulate behaviour and impair informed decision-making, classify and score people based on sensitive traits, or harm vulnerable populations (based on age or disability, for example) are prohibited, because the risks of harm outweigh the alleged benefits. Whether a text or image is AI-generated must be disclosed explicitly.

In these and other domains, we simply cannot afford to wait for posthumous apologies of the kind we saw in the U.S. Congress in response to evidence of suicides caused by social-media interactions. Protecting individuals, society and democracy is worth some sacrifice of the productivity gains promised by unleashing AI (gains that, according to authoritative scholars, are more hypothetical than real).

Another catchy proverb in North American tech circles is “America invents, China replicates, and Europe regulates.” Well, thank God for Europe.
