The front doors to the Federal Court of Canada are pictured in Ottawa on Oct. 23, 2023. Sean Kilpatrick/The Canadian Press

Canada’s Federal Court is promising not to let its judges use artificial intelligence in decision-making until it has consulted the public, a cautious approach that follows controversial judicial uses of machine-learning tools in the United States and other foreign jurisdictions.

Courts in several provinces, including Manitoba, Alberta and Nova Scotia, have issued directives to lawyers, self-represented litigants and intervenors on how they may use automated tools in court filings. But the courts, including Canada’s Supreme Court, have been mostly silent on how judges themselves may use the tools.

The Federal Court said in a statement posted on its website late last month that while artificial intelligence offers the potential for considerable benefits, it also has risks – to the independence of judges and public confidence in the justice system.

“This is an area that warrants great caution,” Chief Justice Paul Crampton told The Globe and Mail in an interview on Monday. “I would be troubled if I thought the judges were using ChatGPT to write their decisions, or if their decisions were in any way being influenced by AI, by machines. Because we don’t have enough of an understanding of their processes and what underlies their algorithms, and what the nature of the biases is.”

The Federal Court is based in Ottawa but its judges travel across the country to hear cases. It deals with matters under federal jurisdiction such as national security, immigration and intellectual property; provides a forum for claims against Ottawa; and reviews decisions by federal ministers and administrative bodies.

Chief Justice Crampton pointed to concerns about the “dehumanization of the law” expressed by U.S. Supreme Court Chief Justice John Roberts in his annual report on the federal judiciary. “At least at present,” Chief Justice Roberts wrote, “studies show a persistent public perception of a ‘human-AI fairness gap,’ reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out.”

Chief Justice Crampton said, for instance, that in cases involving refugees, or family reunification, or student or skilled-worker visas, “there are nuances that machines aren’t going to grasp.” Artificial-intelligence programs might offer comments on matters “about a certain group from a certain country” without being sensitive to “considerations that a human would know,” he said.

He said he is not aware of Federal Court judges using content from generative AI. He said, however, that there are legitimate uses of AI intended to speed up court processes; the court’s technology committee plans to try out AI on the translation of court decisions, to be reviewed by a translator or jurilinguist to ensure accuracy.

“The translations get generated in a matter of seconds and then the ‘human in the loop’ would verify it in a matter of days as opposed to weeks and months” – a time saver made more vital by amendments to federal law requiring that all precedent-setting decisions of a federal court be made available simultaneously in English and French. (The amendments come into force in June.)

Legal observers said they appreciate the court’s caution, as artificial intelligence could make the public skeptical about how judges write their reasons for judgment in any given case.

“The use of AI – or the perception of the use of AI – risks undermining the role of reasons,” lawyer Jeremy Opolsky said. “That they are produced not through careful contemplation but at the click of a button. AI could mean a dramatic change in how we administer justice and this requires caution.”

The Canadian Judicial Council, a body of chief and associate chief justices, said in an e-mail it is monitoring developments in Canada and internationally, has held discussions with experts and will continue to discuss the matter, and may at some point issue a directive.

The Federal Court’s policy statement echoes a directive issued last March by B.C. Supreme Court Chief Justice Christopher Hinkson, which asked judges not to use artificial intelligence. The directive said that after an incident in which a judge in Colombia used artificial intelligence, Chief Justice Hinkson asked a technology committee to examine judicial use of the technology.

That committee recommended judges not use ChatGPT or similar platforms, finding such use “inimical to the integrity of the judiciary and public confidence in the courts,” in part because the use of AI “raises ethical questions as to whether decisions are a judge’s alone.”

Chief Justice Hinkson said in his directive that he accepted the recommendation, and asked his colleagues on the B.C. Supreme Court to refrain from using artificial intelligence until further notice.

The Colombia case involved a ruling on whether an autistic child’s medical costs should be covered by insurance. The judge in that case asked ChatGPT whether a child was exempt from paying for their treatment costs – an apparent off-loading of decision-making, in the view of critics. In the end, both ChatGPT and the judge came to the same conclusion, that insurance should pay.

It was not the only controversial example of how courts are using artificial intelligence. In Wisconsin, a judge imposed a long jail sentence on a Black offender based on an AI-generated risk assessment.

Gavin MacKenzie, a lawyer specializing in ethics, said he expects that artificial intelligence is already being used by some judges and administrative bodies.

“Lawyers (including me) make use of AI in their practices,” he said in an e-mail. “I would be surprised if at least some judges and members of administrative tribunals haven’t done so.” He called the Federal Court’s approach a sensible one in its recognition of potential benefits and risks.

Gideon Christian, a University of Calgary law professor, is studying bias in the use of artificial intelligence for recidivism risk assessments in the criminal-justice system. He said the courts are wise to adopt a flexible, go-slow approach.

“As the technology continues to evolve, and as guidelines and best practices are developed, we might see a gradual increase in the adoption of AI tools in judicial work, always ensuring that such usage aligns with the principles of justice and fairness,” he said.
