The Department of National Defence tested the use of artificial intelligence last year in an effort to improve diversity in the workplace, but the project was run outside of federal rules aimed at ensuring that the technology is used responsibly.
The Privacy Commissioner’s office said DND failed to provide it with a privacy impact assessment, as required under a Treasury Board Secretariat directive. A spokesperson for the commissioner said the office has reached out to DND for more information.
“The use of AI to make important decisions such as hiring can raise privacy and human-rights concerns,” Vito Pilieci said. “A [privacy impact assessment] should have been completed and submitted prior to the use of these AI platforms.”
Under a separate Treasury Board directive, federal bodies must also fill out and publish algorithmic impact assessments for all AI tools. These measure the potential for bias and other risks associated with using this kind of predictive technology. However, DND did not complete one of those, either.
The Defence Department told The Globe and Mail that it used two AI-driven Canadian hiring services – Knockri and Plum – to shortlist candidates as part of a diversity recruitment campaign. The companies, DND says, provide hiring managers with behavioural assessments and measurements of the “personalities, cognitive abilities and social acumen” of applicants.
Ashley Casovan, a former Treasury Board director of digital and data and an author of the AI policy, said she was always concerned that agencies would ignore it.
“None of these policies were put in place to be a burden to departments,” she said. “It would’ve been easy for [DND] to do it.”
Since April, 2020, when Ottawa’s policy on AI use came into force, only a single algorithmic impact assessment has been submitted. That submission was by the Treasury Board itself.
This isn’t the first time federal agencies have disregarded the Treasury Board’s data directives. In 2016, the Treasury Board began requiring them to submit an inventory of all the datasets in their possession – yet several never complied.
The Royal Canadian Mounted Police, for instance, has never submitted one. RCMP spokesperson Robin Percival said the agency did “not currently have a timeline for submitting a completed open data inventory,” and that it was consulting with the Treasury Board.
The DND campaign, aimed at recruiting people for the department’s executive ranks, closed on Sept. 25. The department said all applicants were given the choice to consent to the use of AI or go through an alternative screening method.
Andrée-Anne Poulin, a spokesperson for DND, said the department, but not the Canadian Forces, has used AI in its recruitment work. Because “final decisions” weren’t made using AI, she said, it didn’t complete the Treasury Board’s algorithmic assessment. The department did not immediately respond to questions about the privacy impact assessment.
Ms. Poulin said the department conducted internal and external consultations before selecting Knockri and Plum. She also noted that the National Research Council recently completed an “ethics assessment” of Knockri’s service. The Globe asked the NRC if it had conducted a similar assessment of Plum but the agency declined to answer, citing confidentiality issues.
In a statement, Treasury Board spokesperson Alain Belle-Isle did not respond to specific questions about DND, but said policy “requires federal departments and agencies to complete an algorithmic impact assessment” before the technology is employed.
Knockri and Plum’s websites say they offer AI services as a way to reduce bias in hiring decisions. Knockri did not respond to several e-mails for this story, and Plum declined to comment.
DND said Knockri’s service was used to assess behaviours expected of its executive group. Participants answered questions via video, but the department said “visual and auditory identifiers” were not assessed.
In total, the department said it spent $179,000 on the recruitment campaign, which included contracts with the two AI companies, consulting fees paid to Deloitte Canada, and other costs, such as unconscious-bias training for staff. The AI services were used to whittle down a list of 422 candidates to 34 “top tier” ones, DND said.
Ms. Casovan, the former senior Treasury Board official, said an algorithmic assessment would have identified possible privacy and fairness concerns surrounding Knockri and Plum. Both companies collect personal information on applicants, including psychological profiles and video and audio recordings.
“What is happening with that data?” asked Ms. Casovan, who now serves as executive director of AI Global, a non-profit group devoted to building tools for responsible use of AI. “Where is it being stored? How long does the company have access to that for? Did it go through the U.S.? These are things that I would have a lot of questions about.”
Fenwick McKelvey, an associate professor of communications at Concordia University who studies the use of AI, said there’s a need for better regulation around the technology.
“This [algorithmic directive] was, I think, a centrepiece of the government’s response to that,” he said. “So if they don’t actually follow it, and they don’t commit to it, it undermines the legitimacy of the whole process.”
Prof. McKelvey noted that using AI has previously led to problems in the hiring process. In 2018, a recruiting tool created by Amazon was found to be biased against women because it had been trained on a male-dominated set of résumés.
“It’s kind of a strange paradox here where the very devices which have been called out for their hidden biases are trying to address hidden biases,” he said.