When the Canadian Institute for Health Information (CIHI) published a report card on the country’s cardiac centres, it estimated that the expected death rate at the Peter Munk Cardiac Centre (PMCC), the country’s premier heart surgery facility, should be 1 per cent.
In reality, it was almost double that rate – nearly 2 per cent.
That may not seem like a big difference to a lay person. But surgeons at the PMCC were outraged. The administration of the University Health Network wanted answers. So, too, did donors and patients.
“When CIHI reported results, it looked like our mortality was twice what it should be,” said Barry Rubin, a vascular surgeon and director of the Munk Centre. “That’s not good. And we knew it was wrong.”
In a new paper, published in Tuesday’s edition of the medical journal CMAJ Open, a group of researchers, including Dr. Rubin, explain why it was misleading.
The detailed explanation has to do with the complex mathematics of modelling and risk adjustment, and the differences between administrative and clinical data.
A simpler version is that the way CIHI predicted mortality did not adequately take into account the complexity of patients.
About 2.4 million Canadians live with heart disease; it is the second leading cause of death. Tens of thousands of patients a year undergo procedures such as angioplasty, bypass surgery and valve replacement.
The procedures are remarkably safe and successful. But they are not without risk. Outcomes data is collected and then used to predict risk for future patients. Large variances between expected and actual outcomes raise red flags.
That risk varies a lot based on a patient’s condition. For example, doing triple bypass on a seemingly healthy patient whose blocked arteries were discovered during routine testing is very different from doing triple bypass on a patient who had a massive heart attack and had to be revived using CPR.
The Peter Munk Cardiac Centre, like other programs located in teaching hospitals, tends to attract the top surgeons as well as the most difficult cases.
Paradoxically, this means the best programs tend to have the highest unadjusted mortality rates.
CIHI, which collects and publishes data on almost all aspects of Canadian health care, principally uses administrative data – basic information on patients (anonymized, of course), procedures and outcomes.
But there are other, more sophisticated models. One of them, which the PMCC subscribes to, was created by the Society of Thoracic Surgeons (STS).
It captures far more detailed information on patients, notably seven medical conditions that predict worse outcomes after heart surgery: heart failure, abnormal heart rhythm, recent heart attack, very low blood pressure (shock), recent cardiopulmonary resuscitation, use of a heart pump, and kidney failure.
In the study published in CMAJ Open, the researchers looked at data on 1,635 PMCC patients, 32 of whom died within 30 days of surgery. The CIHI model predicted a mortality rate of 1.03 per cent. The STS model predicted 1.96 per cent – exactly the same as in reality.
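The comparison above can be reproduced from the aggregate figures the study reports. A minimal sketch, using the article’s numbers: the two models’ expected rates come straight from the text, while treating the observed-to-expected (O/E) ratio as the red-flag statistic is the standard risk-adjustment approach and is assumed here rather than taken from the paper.

```python
# Observed outcomes reported in the CMAJ Open study
observed_deaths = 32
patients = 1635

observed_rate = observed_deaths / patients  # ~0.0196, i.e. 1.96 per cent

# Expected mortality rates predicted by each model (from the article)
expected = {
    "CIHI (administrative data)": 0.0103,
    "STS (clinical data)": 0.0196,
}

for model, exp_rate in expected.items():
    # O/E ratio > 1 suggests worse-than-expected outcomes
    oe_ratio = observed_rate / exp_rate
    print(f"{model}: expected {exp_rate:.2%}, O/E ratio {oe_ratio:.2f}")
```

Under the CIHI model the O/E ratio is about 1.9 (mortality looks nearly double what it “should” be), while under the STS model it is essentially 1.0 – which is the whole dispute in two numbers.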
It may seem unusual for a heart centre to publicly advertise its mortality rate. But Dr. Rubin said this is important for patients to know because they should be making decisions about surgery based on the most complete information available.
“What we want is an accurate portrayal of risk,” Dr. Rubin said. “The purpose of measuring is not to say how good we are, it’s to identify problems and fix them.”
The other reason for data transparency is that it matters in attracting the best staff. Surgeons are acutely aware of their outcomes, and models that underestimate risk can lead them to avoid higher-risk patients.
“We want to be the place that takes care of the sickest,” Dr. Rubin said. “But we shouldn’t be penalized for doing so.”
CIHI, to its credit, acknowledges the shortcomings of the way it measures, and even participated in the new research. But switching from administrative to clinical data would require more time, more effort and more money, and only some centres collect the more detailed clinical data.
While this may seem like a purely technical issue, there is actually an important lesson here for the public and policy makers: Report cards are great, but what and how you measure matters a lot.
That’s an important caveat in an era when we are obsessively measuring, encouraging patients to use data to make wise choices, and even moving to provide funding based on outcomes.
As the new research states dryly, “Caution is warranted.”