Jim Balsillie told a parliamentary committee Tuesday that the federal government should abandon its proposed law to regulate artificial intelligence and start again.
The former co-chief executive of BlackBerry Ltd. appeared before the House of Commons industry committee, which is studying Bill C-27. The bill sets out the proposed Artificial Intelligence and Data Act (AIDA), a framework for regulating “high-impact” AI systems to mitigate harm.
“AIDA needs to be scrapped completely,” Mr. Balsillie said in his prepared remarks. The act falls short because it doesn’t create an independent regulator for AI systems and excludes the right for individuals to contest decisions made by algorithms in areas such as insurance, school admissions and credit scoring, said Mr. Balsillie, who is also the founder of the non-profit Centre for Digital Rights.
François-Philippe Champagne, the Minister of Innovation, Science and Industry, introduced Bill C-27 in June, 2022. While the bill is predominantly focused on modernizing the country’s outdated privacy and data protection regime, it also includes proposed AI legislation, including the creation of a commissioner responsible for oversight and enforcement.
The most serious violations under AIDA could result in fines of up to $25-million or 5 per cent of the offending company’s global revenue. AIDA would come into force no sooner than 2025, according to Innovation, Science and Economic Development Canada (ISED).
The push to regulate AI has intensified over the past year as the capabilities of OpenAI’s ChatGPT, which can write coherent text and computer code, and text-to-image generators such as Midjourney have surprised even seasoned AI researchers. The rapid developments have increased concerns about potential negative effects, which include biased outcomes, a surge in misinformation and cybercrime, copyright violations, the displacement of white-collar workers and artists, and a consolidation of corporate power.
The European Union aims to implement its AI Act by the end of the year. On Monday, U.S. President Joe Biden signed a sweeping AI executive order, covering privacy, civil rights and consumer protection while requiring developers of the most powerful AI systems to share safety test results with the government.
Meanwhile in Canada, some academics and digital-rights groups have criticized AIDA. For one, it’s not clear which AI systems will be considered “high-impact” and subject to the law. Others have said the government has not conducted adequate public consultation on the bill, and that the commissioner responsible for enforcing the rules would not be truly independent since the position would sit within ISED, which has an economic development mandate. The act itself also contains few details; the precise regulations would be written only after the bill passes.
To limit harm before AIDA comes into effect, the federal government unveiled a voluntary code of conduct in September for the development of generative AI.
Mr. Champagne said recently he would make amendments to Bill C-27 to deal with some of the concerns raised over the past few months. In October, he wrote a letter to the committee studying the bill to add more detail to the definition of “high-impact,” which he said refers to the use of AI to make determinations related to employment, health care and content moderation on search engines and social media, among other areas.
But making amendments at this stage is “woefully inadequate,” Mr. Balsillie said on Tuesday, arguing that all of Bill C-27, including its privacy components, requires a “wholesale re-do.”
Siobhán Vipond, executive vice-president of the Canadian Labour Congress, told the committee that unions are demanding greater consultation and transparency around the introduction of AI systems in the workplace and in Canadian society. “Unfortunately, AIDA falls short in this respect,” she said.
Ms. Vipond also urged lawmakers to carve out the proposed role of AI commissioner from under ISED, after criticisms from industry experts that such a structure would constitute a conflict of interest.
The Centre for Digital Rights (CDR) said in a written submission that AIDA is only concerned with potential harms to individuals, such as economic loss, and does not address the wider societal impact of AI. It is also narrowly focused on novel, sophisticated forms of AI, as opposed to simpler algorithms that have been in use for years.
“AIDA therefore misses many of the potential harms it is presumably intended to cover, such as those caused by algorithmic amplification of divisive, hateful, sensationalist or politically manipulative messaging,” according to CDR’s submission.
CDR said that AIDA should be sent “back to the drawing board” and the government should start an all-party parliamentary working group to study the topic, engage in “genuine public consultation,” and involve other government departments and agencies in crafting the act.
The lack of consultation about AIDA was also a concern among business groups at the hearing.
“We believe that a more robust consultation process is required,” said Catherine Fortin LeFaivre, a vice-president at the Canadian Chamber of Commerce. “It’s critical that our AI regulations are precise enough to provide important guardrails for safety, while allowing for businesses to harness AI’s full potential responsibly.”
Ms. Fortin LeFaivre urged the committee to move forward with its vote on the privacy portion of the bill, but to return to the AIDA section after conducting a longer consultation and review.
“AI policy is indeed complex. Having the committee attempt to study privacy elements at the same time as the quickly changing AI elements doesn’t provide the conditions for good policy to materialize,” she said.
Public institutions are not covered by AIDA, either. “This kind of exemption is not unexpected in AI regulations, but it’s troubling nonetheless given the temptation of the government to hand over its responsibility to AI,” Ana Brandusescu, a McGill University PhD candidate who focuses on AI governance, and McGill associate professor Renee Sieber said in an e-mail.
Mr. Champagne has said his department has held more than 300 meetings with academics, businesses and members of civil society on Bill C-27, while ISED has said AIDA is intended to be agile in order to deal with the fast-moving nature of AI.