The Canadian government’s proposed use of artificial intelligence to assess refugee claims and immigration applications could jeopardize applicants’ human rights, says a group of privacy experts.

The warnings are raised in a new report from the Citizen Lab, a civic-minded group of technology and privacy-policy researchers at the University of Toronto’s Munk School of Global Affairs, and the University of Toronto Faculty of Law’s International Human Rights Program.

The federal government is already taking steps to make use of artificial intelligence. In May, The Globe and Mail reported that Justice Canada and Immigration, Refugees and Citizenship Canada (IRCC) were piloting an artificial-intelligence program to assist with pre-removal risk assessments and immigration applications on humanitarian grounds.

The report being released Wednesday cautions that experimenting with these technologies turns the immigration and refugee system into a “high-risk laboratory,” as many of these applications come from some of the world’s most vulnerable people, including those fleeing persecution and war zones. Artificial intelligence could be used, according to the analysis, to assess an applicant’s risk factor, predict the likelihood that an application is fraudulent, and gauge whether a marriage is genuine or whether children are biological offspring.

The Citizen Lab is calling on Ottawa to freeze the development of all artificial-intelligence-based systems until a government standard and oversight bodies have been established.

Canada processed more than 50,000 refugee claims in 2017, and projects it will admit 310,000 new permanent residents in 2018. Canada also processes millions of work, study, and travel applications each year as part of an array of immigration and visa programs.

“IRCC is committed to implementing this type of technology in a responsible manner and working collaboratively with the Treasury Board of Canada Secretariat,” spokeswoman Nancy Caron said in an e-mailed statement. “IRCC is exploring tools that can support case law and legal research, facilitate trend analysis in litigation, predict litigation outcomes, and help develop legal advice and assessments.”

Ottawa has recognized that government branches will need guidance when undertaking artificial-intelligence projects. The Treasury Board is currently developing a federal standard for automated decision-making systems, and recently published a draft “algorithmic impact assessment,” which will help departments grapple with the expected and unexpected implications of algorithmic systems.

In procurement documents, IRCC stated it hoped the system could help assess the “merits” of an immigrant or refugee’s application before a final decision is rendered.

“AI is not neutral. If the recipe of your algorithm is biased, the results in the decision are also going to be biased,” said Petra Molnar, an immigration lawyer with the International Human Rights Program and co-author of the report.

These biases have sometimes become painfully evident. In 2015, Google’s photo-tagging algorithm suggested the label “gorilla” when presented with an image of a black person. To fix the problem, Google eventually removed “gorilla” from the list of possible labels.

Prejudiced algorithms are making their way into essential civic processes, too: In 2016, the U.S. publication ProPublica revealed that COMPAS, a risk-assessment algorithm used in sentencing and parole determinations in Florida, was biased against black people.

“We’re heartened by the fact that the government is thinking through the use of these technologies,” Ms. Molnar said, “but what we want to do is bring everyone to the table and look at it from a human-rights lens as the starting point.”
