The federal government is unveiling a first-of-its-kind tool that will help departments determine the risk involved in automated decision-making, processes that some agencies are already using.

Due to be unveiled on Wednesday at the Open Government Partnership Summit in Ottawa, the algorithmic impact assessment (AIA) will be filled out by departments looking to use automated decision processes, including algorithms and predictive models, machine learning and artificial intelligence (AI).

As it is filled out, the assessment, which was developed by the Treasury Board of Canada Secretariat, assigns a risk score and details the levels of oversight and safeguards required for the project based on that score. In this case, risk refers to the chance that a project could lead to unfair, unexpected or hard-to-explain consequences.
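The pattern described above — questionnaire answers accumulate points, and the total maps to a required level of oversight — can be sketched in a few lines of Python. The question names, weights, and tier thresholds below are invented for illustration; the AIA's actual scoring rubric and impact levels are defined by the Treasury Board's directive.

```python
# Hypothetical sketch of a questionnaire-based impact assessment.
# Weights and thresholds are illustrative, not the AIA's real rubric.

def risk_score(answers: dict, weights: dict) -> int:
    """Sum weighted points across all answered questions."""
    return sum(weights[q] * v for q, v in answers.items())

def oversight_tier(score: int) -> str:
    """Map a raw score to a required oversight level (illustrative cutoffs)."""
    if score < 10:
        return "Level I: little to no additional oversight"
    elif score < 25:
        return "Level II: peer review and public notice"
    elif score < 40:
        return "Level III: human intervention in individual decisions"
    return "Level IV: external review and senior approval"

# A project whose decisions affect individuals' rights and are hard to
# reverse accumulates more points than a low-stakes one.
answers = {"affects_rights": 3, "reversibility": 2, "uses_personal_data": 1}
weights = {"affects_rights": 5, "reversibility": 4, "uses_personal_data": 3}
score = risk_score(answers, weights)  # 3*5 + 2*4 + 1*3 = 26
print(oversight_tier(score))  # falls in the Level III band
```

The point of the design is that the safeguards scale with the score: a low-risk pilot faces few requirements, while a high-risk system triggers mandatory human review.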

The use of automated decision-making by governments has been a controversial topic. With these systems, a computer program is “taught” to recognize patterns and make decisions on its own (for instance, whether a defendant awaiting trial should be released on bail), but results can be unexpected or hard to explain. In the United States, a risk-assessment algorithm used in sentencing and parole decisions in Florida called COMPAS was found to be biased against black people.

In 2018, Citizen Lab, a group of civic-minded technology and privacy researchers at the University of Toronto’s Munk School of Global Affairs, and the University of Toronto Faculty of Law’s International Human Rights Program jointly published a report on the risks of government use of automated decision-making systems. The report found that using AI for immigration and refugee decisions – something the federal government has been looking into, per procurement documents – amounted to a “high-risk laboratory.”

With the launch of the new tool, federal government departments will now be required to fill out the AIA for any projects started on or after April 1, 2019, that incorporate some form of automated decision-making. By default, completed AIAs will be posted online, although they can be withheld if they contain details such as classified information or intellectual property.

The Treasury Board of Canada Secretariat has been leading Ottawa’s efforts to build its AI policy. In March, former Treasury Board President Jane Philpott announced the government’s Directive on Automated Decision-Making, which lays out the guidelines for the government’s responsible use of AI.

Ashley Casovan, a director of data and digital and one of the AIA project’s leaders, said the effort is meant to make automated processes safer and more accountable. “This really helps departments have the right guard rails around how to design these systems,” she said.

The AIA is an open-source project, meaning others can adopt it and adapt it as they see fit. (According to Ms. Casovan, the Mexican government has already adopted the AIA.)

The AIA will also go beyond just producing a risk score: Ms. Casovan said the AIA is designed to make computer-driven decisions explainable or defensible in cases where a person wants to know why a certain decision was made.

Several automated decision projects have already gone through draft versions of the tool: Ms. Casovan estimates they’ve seen roughly 15 submissions over the past three months. Over the next year, she expects to receive roughly 100 project submissions.

Her team is also looking at procurement. In January, Public Works and Government Services Canada published a list of suppliers preapproved to bid on AI contracts. The final list of companies, which were vetted for their AI talent and ethics, includes well-known Canadian companies such as Open Text Corp., along with data giants such as Amazon.com Inc. and Palantir Technologies Inc., an American company that has come under scrutiny in the past for its work building predictive systems for police forces.

Petra Molnar, a lawyer with the International Human Rights Program at the University of Toronto and co-author of the 2018 Citizen Lab report, called the AIA “a good first step.”

“It’s great to have an integrated tool that all of government can use,” Ms. Molnar said. “But I think we need to make sure that it goes beyond just kind of a box-checking or rubber-stamping exercise.

“One worry is that just because you have an impact assessment, that doesn’t mean you also have a way to then deal with that impact in the future,” she said.

For Ms. Casovan, early results have been promising. In one case, her team worked with Transport Canada on a pilot project that produced risk assessments of air cargo and packages. “By having them go through an early version of the AIA, we were able to anticipate some of the points that they hadn’t thought of fully,” she said.

“We’re really trying to balance being innovative and providing services in an effective and efficient way to Canadians … but then also doing it in a way that is protecting the public,” Ms. Casovan said.
