Canada’s banks are investing heavily in artificial intelligence but are also ramping up efforts to build guard rails around the powerful new technology as clients grow wary of potential risks to privacy and fairness.
As banks roll out AI that analyzes customer behaviour and predicts financial trends, and machines make an ever greater number of decisions on everything from extending credit to managing risk, executives and engineers are keen to show they can harness the technology and explain its consequences.
This stems in part from a reckoning in the broader technology sector, where privacy breaches have made more consumers wary of the power of artificial intelligence and machine learning, when applied to vast sets of sensitive data. The resulting public backlash has forced tech giants to reconsider how quickly they deploy AI and whether they can demonstrate the good it can do for society at large.
“In banking, the same thing is going to happen,” said Tomi Poutanen, chief AI officer at Toronto-Dominion Bank and co-founder of AI company Layer 6 Inc. “These AI algorithms are getting so good that they’re rivalling humans in many cases. So there’s been a bit of a pushback from people saying, ‘Hold on, I’m not that comfortable with the use of my data this way. And I’m not that comfortable with the lack of transparency on the AI applications that are being used.’ ”
For highly regulated banks that form the backbone of the global financial system, the risks in rushing to the frontier of AI are just as serious. Artificial intelligence requires vast amounts of data and computational power, and banks have both in spades, but they are still in the early stages of learning how to use them to their full potential. Now, they are increasing efforts to explain how machines learn and make predictions or decisions, and to correct for potential bias in AI.
“We operate on trust,” said Foteini Agrafioti, chief science officer at Royal Bank of Canada and head of Borealis AI. “We have no other way of doing business, so we have to solve the hard problem. We don’t go fast and break things.”
Banks prioritize AI projects according to business needs; since TD acquired Layer 6 for more than US$100-million in early 2018, the bank has collected a list of 150 possible uses for AI. Some of those are models used to predict individual customers’ retail-banking needs three to six months in advance, so the bank can anticipate which products or services to offer.
In an online survey of 1,200 adults that Environics Research Group conducted for TD, 72 per cent of Canadians polled were comfortable with the use of AI if it means they would receive personalized services from banks. Yet 77 per cent of respondents said they were concerned about the risks AI poses to society, and an equal number worried that AI is advancing too quickly to fully understand those potential risks.
As a result, bankers know they need to head off fears that machines and algorithms could run amok. “Just because it’s cool from a technology perspective doesn’t mean that it’s something that adds value to the broader society,” Mr. Poutanen said.
Ms. Agrafioti and Mr. Poutanen, who both spoke about responsible AI at the Economic Club of Canada in Toronto on Thursday, work a few floors apart in Toronto’s MaRS Discovery District. And though they are direct competitors, Borealis and Layer 6 recently came together at a round table convened by TD about how to head off risks and use AI ethically. Also at the table were experts from financial technology firms, consultants and academics.
Some of the issues they discussed involve unintended bias in AI systems and the need to promote diversity in the teams that build AI technology so that it produces fair outcomes.
But the foremost concern was AI’s “black-box” problem, also known in the sector as “explainability” – the notion that a company must be able to explain how its AI arrived at a decision. In some advanced cases, “we see the input, we see the output, we know mathematically what happens in between,” Ms. Agrafioti said. “It’s just that we cannot rationalize that process in a way that humans understand it.”
She added: “How do you trust a system [when] you don’t know how it makes decisions?”
Mr. Poutanen said banks have rigorous processes to validate AI models and can retrain systems if they stop performing as intended. “You can always stress-test it,” he said. At TD, “there’s no case where an AI model starts to learn on its own and nobody’s got oversight over it.”