Opinion

Last month, the European Commission released its proposal for a regulation on artificial intelligence. In its current form, the Artificial Intelligence Act would prohibit certain AI applications outright and impose obligations on others, depending on the level of risk to European citizens’ health, safety or fundamental rights. The European Union’s growing ability to unilaterally set global rules for technology through the power of its single digital market (the “Brussels effect”) means that both governments and organizations across the world will now begin preparing for the act’s wide-ranging effects on existing AI operations.

In Canada, the act should raise important domestic and foreign policy considerations for the federal government. First, it should invite greater scrutiny of AI-related legislation and regulation, notably the proposals contained in Bill C-11, the Digital Charter Implementation Act.

The EU AI regulation should also motivate Canada to leverage its multilateral leadership to promote harmonized approaches to compliance, advance Canadian values and unlock the economic benefits of increased global co-operation on AI governance.

Some of Bill C-11’s AI-related provisions could have a range of unintended consequences, including prohibitive requirements for small businesses and disruptive effects on international trade. For instance, the bill’s definition of automated decision-making systems – which currently includes any technology that assists or replaces human judgment – stands at odds with the EU’s General Data Protection Regulation (GDPR), which more narrowly regulates only decisions made solely by automated means, without any human involvement.

The Canadian bill also imposes a blanket obligation on organizations to provide an explanation of any prediction, recommendation or decision made using an automated decision-making system, including how any personal data used was obtained. The GDPR, in contrast, follows a more proportionate, risk-based approach, requiring organizations to provide an explanation only where a wholly automated decision significantly affects, and possibly harms, a person, for example with respect to their legal status or financial interests. Bill C-11’s expansive definition of automated decision-making systems, combined with its blanket obligation to provide information about their use, would create disproportionate compliance requirements that could cripple Canada’s growing AI startup community.

Moreover, Bill C-11 has the potential to inadvertently contribute to rising geopolitical tensions related to AI. Over the past few years, the uneven development of national regulations has been a source of contention in international trade, particularly regarding transatlantic transfers of personal data between the United States and Europe. Essentially, inadequate protections, or, conversely, excessive regulations, have threatened the cross-border data flows that are critical to developing AI systems, to their commercialization and export, and ultimately to the industry’s competitiveness in global markets. By introducing more stringent rules than those proposed by some of our closest allies, Canada risks unnecessarily adding to these tensions with Bill C-11, potentially stifling AI- and data-related trade.

Instead, following an important course correction on Bill C-11, Canada should counter the Brussels effect with its own signature approach – an “Ottawa effect,” so to speak. Canada can do this by rallying international consensus around a plan to develop harmonized approaches to compliance with emerging data and AI regulations, anchored in respect for human rights, democratic values and the promotion of innovation.

As a basis for this plan, Canada should broaden and expedite work begun under the Digital Charter, a policy document outlining the government’s approach for fostering trust in the digital economy. A road map of priority data and AI standards should be developed in light of national and international objectives.

The proposed EU AI Act relies on the emergence of such standards as the principal means for companies to certify that their products comply with the regulation and are ready to be sold in the common market. We should inform these standards with Canadian objectives, and position Canada as a global hub for trustworthy, efficient AI certification. Given Canada’s 14 free trade agreements covering 60 per cent of global GDP, a made-in-Canada AI certification could offer companies unrivalled access to international markets and public procurement opportunities. But we cannot accomplish this if our regulations are at odds with best practices and with those of our allies.

To support these efforts, Canada should leverage its leadership position as a co-founder of the Global Partnership on AI, and an active contributor to the Organization for Economic Co-operation and Development, to clarify how standards, conformity assessments and certification programs can help promote compliance with legislative requirements for AI.

The international standardization road map developed through these efforts could be presented at the Future Tech Forum convened by the U.K. during its Group of Seven presidency, and in advance of the Biden administration’s Summit for Democracy, planned for later this year.

Through multilateral leadership on data and AI standardization, an “Ottawa effect” could help the international community work toward better global co-operation, including partnerships on R&D, innovation programming and trade, while advancing respect for human rights and democracy – not only in Europe, but worldwide.

Philip Dawson is a Montreal-based lawyer and AI policy adviser. He is senior policy counsel at the Responsible AI Institute and a technology and human-rights fellow at the Harvard Kennedy School.
