Opinion

Yoshua Bengio discusses artificial intelligence, democracy and the future of civilization at the C2MTL conference, in Montreal, on May 24. Christinne Muschi/The Canadian Press

Yoshua Bengio is a Turing Award-winning full professor at the Université de Montréal, the founder and scientific director of the Quebec-based AI institute Mila, and a Canada CIFAR AI Chair.

From my early days as a graduate student in Montreal, I was drawn to an exciting scientific quest: understanding how human intelligence works. My goal was to contribute to finding the mathematical formulas, much like the laws of physics, that would resolve much of the apparent complexity of our intelligence. This simplified mapping of biological intelligence would then allow us to design artificial ones that could enhance our quality of life, propel scientific discovery and tackle important challenges. My work on artificial neural networks, inspired by the way humans think, has played a significant role in the development of deep learning – the way machines learn through deep neural nets whose structure is loosely modelled on the brain's own networks of neurons.

What started off as a purely curiosity-driven research endeavour shifted about a decade ago, when industries at large grasped the impact of applying our deep learning ideas to actual products, igniting accelerating growth in research investment and in the deployment of AI solutions in the marketplace. In 2019, when I received the Turing Award alongside Geoffrey Hinton and Yann LeCun for our contribution to these breakthroughs, I was proud, optimistic and excited by the tremendous progress and innovation that AI was bringing to the world.

I must humbly admit that it was only this year, as I studied OpenAI’s large language model (LLM) ChatGPT and projected myself into the future, that I realized at an emotional level the magnitude of the risks associated with unbridled advances in AI. I had already been quite vocal about the importance of a responsible and ethical approach to AI, and I had read about AI safety, but I had not connected the dots, because I believed that the limitations of current technology meant that human-level AI was still very far into the future.


But as the months passed, and as ChatGPT and similar LLMs continued to make giant leaps, my apprehension steadily grew.

We have already reached a point where AI systems can master language and possess sufficient knowledge about humankind to engage in highly proficient – although unreliable – discussion and content creation. The next versions of such AI systems will certainly show significant improvements and continue to rapidly propel us into the future. And, as with any dual-use technology, the same AI capability that can achieve beneficial objectives can also be applied to nefarious goals. We may not need to worry too much so long as those capabilities are not powerful enough to threaten key pillars of our society, such as our democracy and national security. But if AI capabilities continue to advance at a breakneck pace, they could end up powerful enough to be dangerous in critical domains, potentially even surpassing us. Geoffrey, Yann and I now agree that, given the recent success of generative AI, human-level AI might be only a few years or decades away – and I feel that society is not at all ready to handle the destabilizing effects and dangers of future, increasingly powerful versions of the technology that I, and many others, have created.

Such a rapid and major change in perspective about my work and its value has been difficult for me to navigate, as I'm sure it has been for many other leading AI researchers. We all want to feel good about ourselves, and denial can be quite comforting. That was certainly true for me over the many years during which I read or heard about AI safety issues without fully digesting their implications. In recent months, many people have even asked me if I have any regrets or remorse about the potential repercussions of the technology I've helped shape. Indeed, major AI risks are a grave source of concern for me, keeping me up at night, especially when I think about my grandson and the legacy we will leave to his generation.


However, I have always been pragmatic, quick to mobilize and pivot my efforts toward what can be done. In the face of daunting challenges, I have put my fears aside and asked myself: How can I play my part and contribute positively to humanity? I feel that I have a responsibility to do everything in my power to help avoid the pitfalls of AI, because I still firmly believe that it can make a significant positive contribution to our collective future.

This has become my primary focus and highest priority. I have testified before the U.S. Senate, advised the UN's Secretary-General, and joined working groups at the OECD and UNESCO, to share my deep concerns about AI's immediate and longer-term threats to democracy, national security and our collective future, all of which justify swift governmental intervention. Beyond the AI harms we already know about, such as discrimination and labour-market disruptions, these risks include the exploitation of powerful AI for disinformation, cyberattacks and excessive concentration of power, and its use in designing novel biological or chemical weapons.

I feel strongly that, in the face of these emerging risks, governments must intervene urgently and ambitiously. Britain, for instance, has already seized this moment: In November, world leaders, researchers and top tech CEOs will gather in Bletchley Park at the invitation of British Prime Minister Rishi Sunak, who is positioning his country as the AI safety flag-bearer for the world. He has announced significant financial allocations, including £900-million ($1.51-billion) for computing resources and an additional £100-million ($167-million) for the country's new Frontier AI Taskforce, which is mandated to evaluate and mitigate such risks to national security. (I have been appointed to its advisory board.)


But AI safety must be a multilateral effort. Today, Canada is recognized as one of the top five AI countries in the world in terms of talent, research, innovation and investment, alongside the United States, China, Britain and Singapore. This achievement is in large part because of the ambitious and federally funded Pan-Canadian AI Strategy launched in 2017, which allowed us to attract outstanding talent and achieve cutting-edge research. Now is the time to build upon this ambition. Indeed, I believe Canada has a once-in-a-generation opportunity and moral responsibility to demonstrate its leadership in safeguarding our democracies, human rights and safety.

The first step is scientific leadership, building on our current success. The core scientific ingredients of today's AI models such as ChatGPT were made possible by Canadian breakthroughs in deep learning, in particular regarding language models and neural network architectures that incorporate attention mechanisms. We should now leverage the strength of our academic institutions, our AI ecosystems and our Pan-Canadian AI Strategy to significantly accelerate research focused on protecting democracy and the public. This would enable better-informed AI policies and safety protocols, and would require expertise and research in AI as well as in the social sciences.

Despite the increasing risks of AI misuse, we are currently investing exponentially more in developing the capabilities of AI, without adequately funding efforts to ensure these systems are safe and under adequate democratic oversight. Moving forward, every dollar invested in research to increase AI capabilities should come with a dollar invested in ensuring its responsible use, by deepening our interdisciplinary research in areas such as fairness, safety and governance.

There is also much discussion in political and scientific circles about the need for institutional leadership – that is, the creation of international bodies dedicated to mitigating AI risks. Canada would be an ideal host for one of these new organizations. We could even take the lead in co-creating an international, publicly funded research facility that provides powerful AI computing resources and promotes the development of expertise, innovation and interdisciplinary research for safe large-scale AI models that serve society. We are well placed to do so, thanks to our scientific excellence, highly skilled work force and excellent track record as a multilateral convener and a pioneer in the development of responsible AI – in addition to our energy infrastructure's greener footprint, and a climate that is conducive to minimizing the cost of cooling the supercomputing hardware that would be required.

Finally, governance mechanisms, including binding ones such as laws and treaties, will be essential for mitigating the risks AI poses to core pillars of our society. With the most powerful AI systems currently in the hands of a few private companies, we need to satisfactorily answer questions such as: Who decides how powerful AI will be used? How do we determine what is safe for the public and our democracy? How is the resulting wealth shared?

Canada has shown leadership in governance, first with the Montreal Declaration for the Responsible Development of AI, then as co-founder of the Global Partnership on AI, and more recently by introducing a voluntary code of conduct for generative AI systems and a legislative framework to regulate AI more broadly. I support these efforts, and recommend enhancing Bill C-27 to include key elements such as the labelling of AI-generated synthetic content, setting up a licensing and model registration regime, defining safety obligations and requiring independent audits. In addition to deploying national AI governance, Canada should mobilize its reputation and influence to accelerate international efforts to regulate AI, including at the United Nations level. Canada is uniquely placed – with its commitment to human rights and its reputation as a middle power – to federate multilateral alignment.

With AI set to contribute trillions of dollars a year to the global economy, countries worldwide are actively building their capabilities to deliver economic and social impact, while generating better outcomes for their citizens. I believe we must mobilize the greatest Canadian minds to contribute to a bold, multilateral and globally co-ordinated effort to fully reap the economic and social benefits of AI, while protecting human rights, society and our shared humanity.

We must do it for my grandson – and all of humankind.
