As it’s piloted throughout Canadian health care, artificial intelligence (AI) is poised to tackle long-standing problems that have put the system on the brink.
That’s if the technology manages to overcome a range of technical, practical and ethical challenges, experts say.
In Canada, the number of hospital beds per 1,000 people has been declining for decades, from 6.8 in 1985 to just 2.5 in 2019 – the third lowest among G20 nations – according to the World Bank. The health care system is also struggling with a shortage of workers, especially in Ontario, which is expected to need 33,000 more nurses and personal support workers by 2028.
AI is emerging as a compelling solution to the seemingly impossible task of improving hospital capacity and patient outcomes – and reducing wait times and staffing needs – without the cost and time constraints of building more hospitals and training more staff.
“This is about the sustainability of our publicly funded health system,” said Roxana Sultan, the chief data officer and vice-president of health at the Vector Institute, a not-for-profit AI corporation based in Toronto. “We can’t just keep ad infinitum adding more space, adding more people; there have to be solutions that enable us to work in a more innovative way to address these issues.”
The Hospital for Sick Children in Toronto is currently testing one such solution – an AI tool that orders tests for patients upon arrival based on their symptoms, rather than waiting for a doctor to make an initial assessment.
“The AI-enabled solution can automatically order tests that are specific to that patient’s symptoms,” explained Azra Dhalla, the Vector Institute’s director of health AI implementation.
“By the time the patient sees the health care practitioner those test results are already available, and that reduces the amount of time they actually need to be at the hospital.”
According to a Sick Kids spokesperson, the technology is expected to cut two to three hours off emergency-department wait times. Other AI solutions, meanwhile, are helping hospitals anticipate demand with greater accuracy than was previously possible.
In October 2020, for instance, hospital network Unity Health Toronto rolled out a tool that predicts, with a week’s notice, how many patients will visit any given emergency room at any given time.
“We can tell you that on Saturday from noon to 6 there will be 82 patients waiting in the emergency department; 10 of them will have mental health issues, 12 will be harder to treat, the rest will be easier,” said Dr. Muhammad Mamdani, the vice-president of data science and advanced analytics at Unity Health Toronto.
The technology, which was rolled out at St. Michael’s Hospital in Toronto in 2020 and has been adopted by St. Joseph’s Hospital, another Toronto-based hospital a few kilometres west, considers everything from historic patient flows to weather forecasts to major city events – such as concerts and marathons – to generate ER traffic forecasts with 94 per cent to 96 per cent accuracy. Dr. Mamdani adds that similar technology is also being used to free up hospital beds sooner while improving patient outcomes – even saving lives.
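The kinds of inputs described above – historic patient flows, weather, city events – can be pictured as a feature set feeding a forecasting model. The sketch below is purely illustrative: the field names, weights and toy adjustments are assumptions, not Unity Health’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ForecastInputs:
    """Illustrative features an ER traffic model might consider."""
    historic_mean_visits: float  # average visits for this time slot
    temperature_c: float         # weather forecast
    precipitation_mm: float
    major_event_nearby: bool     # concert, marathon, etc.

def forecast_visits(x: ForecastInputs) -> float:
    """A toy linear adjustment standing in for a learned model."""
    estimate = x.historic_mean_visits
    estimate += 0.2 * max(0.0, x.temperature_c - 25.0)  # heat-related visits
    estimate += 0.1 * x.precipitation_mm                # weather-related injuries
    if x.major_event_nearby:
        estimate *= 1.15                                # event-driven surge
    return estimate
```

On a hot Saturday with a marathon nearby, this sketch nudges the forecast above the historical average; the real system layers many more signals, and a trained model, on top of features like these.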
“It’s a machine-learning model that ingests data every hour on the hour,” he said. “It categorizes patients as low, medium and high risk; as soon as it reaches the high-risk threshold it pages the medical team and our protocol is the medical team has to see that patient within two hours.
“We’re seeing significant decreases in mortality among high-risk patients as a result of this solution,” Dr. Mamdani said, adding that the same tool also has benefits for patients nearing the end of their care.
He explains that the process for discharging a patient can take anywhere from a few hours to a few days, often requiring checks and tests from multiple departments and providers. It’s also a delicate balance: keeping a patient too long is costly, but sending them home too early could result in further complications. The AI-based solution can predict when patients are two days away from being eligible for discharge and shares that data with clinical teams, enabling them to be more proactive in discharge planning and freeing up beds faster.
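The hourly re-scoring and paging protocol Dr. Mamdani describes can be sketched in a few lines. The thresholds and tier names below are hypothetical – the production model’s scoring and cut-offs are not public.

```python
# Hypothetical cut-offs; the real model's thresholds are not public.
HIGH_RISK = 0.7
MEDIUM_RISK = 0.3

def categorize(risk_score: float) -> str:
    """Bucket an hourly model score into low/medium/high risk tiers."""
    if ris_score_is_high := risk_score >= HIGH_RISK:
        return "high"
    if risk_score >= MEDIUM_RISK:
        return "medium"
    return "low"

def on_hourly_update(patient_id: str, risk_score: float, page_team) -> str:
    """Re-score each hour; page the medical team when a patient turns high risk."""
    tier = categorize(risk_score)
    if tier == "high":
        # Protocol: the team must see the patient within two hours of the page.
        page_team(patient_id)
    return tier
```

The point of the design is that the model never treats the patient – it only escalates to a human team, on a fixed clock, when the score crosses a line.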
Despite the impressive results, however, Dr. Mamdani warns that the technology can’t be deployed more broadly just yet, as it needs to be custom-built around the unique attributes of each community it’s serving.
“An algorithm used on kids for Sick Kids, I wouldn’t feel comfortable deploying in an inner-city adult hospital,” he said. “You need people who understand all of these challenges to actually be part of the process; you need people to constantly monitor these algorithms to make sure they’re working properly.”
That is what inspired Unity Health to partner with Signal 1, a Toronto-based startup that is commercializing and deploying AI technologies developed by the hospital system to others around the country. The company has already deployed the emergency room traffic prediction tool at Grand River Hospital in Waterloo, Ont., and is in talks with hospital systems across Canada.
Mara Lederman, co-founder and chief operating officer of Signal 1, emphasizes that health care decisions are not being made by AI; rather, the tools inform human decision-makers and streamline the process of gathering that information. “What these tools are largely designed to do is empower these workers with the information that they’re otherwise trying to figure out on their own, or don’t have the time to stop and determine,” she said.
Even amid its proliferation, AI at large faces valid concerns about the potential for bias and data breaches, as well as a growing movement – led by some of the biggest names in tech – seeking to pause AI development.
AI is only as powerful as the data it’s built on, and often those data reflect historic biases. That’s why many advocate for what’s called “explainability,” which requires solutions to demonstrate how and why they came to a particular conclusion – as opposed to a “black box” approach, in which AI makes determinations with little or no transparency into its decision-making process.
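One simple way to picture the contrast: a transparent model whose score decomposes into per-feature contributions can say why it flagged a patient, where a black box only emits the score. The weights and feature names here are invented for illustration – a real clinical model would be learned from data.

```python
# Invented weights for illustration only.
WEIGHTS = {"age_over_65": 1.2, "abnormal_vitals": 2.0, "prior_admissions": 0.6}

def explain_score(features):
    """Return a risk score along with each feature's contribution to it."""
    contributions = {name: w * features.get(name, 0.0) for name, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, why = explain_score({"age_over_65": 1, "abnormal_vitals": 1, "prior_admissions": 2})
# 'why' shows which feature drove the score – here, abnormal vitals –
# so a clinician (or an auditor checking for bias) can see the reasoning.
```

If a contribution turns out to rest on an irrelevant or biased feature, that feature can be removed or down-weighted – the “evolve the system over time” process Dr. Wong describes.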
“Using explainability allows us to evolve a system to improve over time,” said Dr. Alexander Wong, the Canada Research Chair in AI and medical imaging at the University of Waterloo, who has also developed solutions like an AI-based patient assessment tool to allocate beds in hospital intensive care units. “We need to teach the AI not to focus on these things that are not relevant, and once it knows what it should be looking at it becomes less biased and more fair.”
Dr. Wong adds that healthy debate regarding AI is important, but those conversations should also balance the ethical implications of not using potentially lifesaving solutions. “No system is perfect – there are significant efforts to make it as good as we can – but the key thing to think about is the ability to see a lot more patients and give them a better quality of care,” he said. “I think that outweighs a lot of the limitations.”