YOU’RE NOT A MOUSE. SO WHY SHOULD YOU BE TREATED LIKE ONE?
We’re inundated with declarations about the latest thing that’s good for us. But behind some claims are studies based on rodents, small sample sizes and shoddy science. How do we get off the wheel? Adriana Barton offers a crash course in health literacy
To better understand health information, it helps to know the lingo:
Sample size: The number of participants or group of participants in a study; in general, larger sample sizes lend more weight to study findings.
Intervention: The drug, medical device, procedure or other treatment being studied.
Control group: A group of participants, or “controls,” who do not receive the intervention being studied but serve as a comparison when researchers evaluate results in the treatment group.
Placebo: An inactive substance or treatment given to the control group that is designed to be indistinguishable from the actual drug or intervention being studied.
Double blind: A study condition in which neither the participants nor the researchers giving the interventions know which participants have been assigned which treatments.
Pilot study: A small-scale research project designed to test a hypothesis in advance of a full-fledged study.
Observational study: A study in which participants are not assigned to specific interventions but are assessed over time for health outcomes. Observational studies may find associations between certain habits and health conditions but cannot reliably determine cause-and-effect.
Clinical trial: A study using human subjects to evaluate the effects of an intervention on health outcomes.
Randomized controlled trial: A “gold standard” study design in which participants are randomly assigned to either treatment or control groups, with an aim to reduce selection bias.
Adverse event: An undesirable change in a participant’s health that occurs during the study, or within a set period of time after study completion.
Meta-analysis: A statistical method for comparing and combining the results of all relevant studies on a question, in order to identify patterns or test the robustness of the main findings.
Would you take fitness tips from a mouse? Would you stop drinking red wine based on a study of Italian senior citizens? What about cutting coffee from your diet because of an observational study that cannot establish cause and effect?
Of course not, you say. But articles about wine, chocolate and coffee studies, and a recent controversial report urging readers to “get out of your body’s comfort zone” based on a study of exercising mice, imply you should change your diet and fitness choices based on rather narrow findings, including the biochemical changes observed in rodents.
Many readers know better than to put too much faith in animal studies. But surely we can trust weight-loss advice from a heart surgeon? What if his name is Dr. Mehmet Oz, and he – despite his medical credentials and his reputation as America’s celebrity health guru – was forced to appear before a U.S. Senate committee investigating false advertising?
Now that reams of medical information are at our fingertips and health products are promoted day and night, “it’s difficult for patients to wade through what’s valid and what isn’t,” said Dr. Sharon Domb, a family physician at Sunnybrook Health Sciences Centre in Toronto. Domb said she appreciates a well-informed patient, but added that Internet users often walk into her office misinformed. Online, it is all too easy for unscrupulous individuals to post things that “sound quasi-official,” she said.
Nevertheless, health consumers are consulting Dr. Google in droves. More than 70 per cent of U.S. adults look for health information online, according to a 2012 survey by the Pew Research Center in Washington. Websites, television and other media outlets have long surpassed health-care providers as sources of health information, said Dr. Louis Hugo Francescutti, president of the Canadian Medical Association.
“On the whole, the Internet has been a good thing,” Francescutti said. But he cautions people to “really do your research” before going into a panic because a mole on the cheek resembles an online photo of a cancerous growth.
Even rigorous research studies should be read with a proverbial grain of salt, Francescutti said. Study results may not hold true for patients who differ in age, sex, socio-economic status or ethnic background from participants studied, he explained: “You can’t take one study and apply it to the entire population.”
In the age of information overload, it’s up to readers to become “healthier skeptics,” said Gary Schwitzer, a former CNN health journalist and current publisher of HealthNewsReview.org, a group of more than two dozen physicians and science writers who grade health reporting by major U.S. news organizations.
Here’s how to distinguish reliable health information from celebrity endorsements and other misleading reports that could waste your money or put your health at risk.
The Internet is a font of credible health information – if you know which sites pass the test, Francescutti said. Reputable sources include the Mayo Clinic, Johns Hopkins University and the U.S. Centers for Disease Control and Prevention. For searches beyond these well-known sites, Francescutti recommends checking out the U.S. National Library of Medicine’s online tutorial on how to evaluate which Internet sources can be trusted. “You have to be careful of snake-oil salespeople,” he said.
As for health news, members of the public should look for reports that analyze new studies, drugs, products or treatments with a critical eye, Schwitzer said. Health reports should include independent assessments of the evidence, as well as discussions of the costs, benefits and harms associated with the new drug or treatment compared with older approaches, he said.
Articles about research conducted on animals or human cells – but not yet in humans – should include caveats about how the results may not apply to people. (It was the normally excellent New York Times that came under fire recently for the blog that quoted a researcher urging human behaviour based on his study of exercising mice.)
For research involving “surrogate markers,” such as blood pressure or cholesterol levels, reports should explain that numbers on a graph do not necessarily translate into the kind of health outcomes that matter to patients, such as a decreased risk of disease or premature death. For example, a drug that raises “good” cholesterol may not reduce the incidence of heart attacks and strokes.
Medical research is not only a science but also a business, Francescutti pointed out. Organizations in what he calls “the health-industrial complex” reap rewards when they present new treatments or study results in the best possible light, whether it’s an academic institution competing for research grants or a medical technology company launching a new product.
In reality, however, medical science is far less certain than news releases from research institutions may suggest. Francescutti cautions readers to be leery of reports that use words such as “breakthrough,” “stellar” and “ground-shaking” to describe new research findings and treatments. “Sometimes results that are promising in the lab are still 20 years away in terms of having impact on patients,” he said.
All too often, new research findings and treatments are not properly vetted before they hit prime time, Schwitzer said. Of 1,889 media reports on health research published from 2006 to 2013, Schwitzer and a team of 38 physicians and science writers found that half relied on a single source for the article, or failed to disclose their sources’ conflicts of interest. Their review was published this month in JAMA Internal Medicine.
Anecdotes from patients with specific ailments may help readers relate to a health issue, but unless they are balanced with an overview of potential side effects, patient dissatisfaction or treatment alternatives, patients’ emotional stories risk putting “an overly positive spin” on the drug, device or procedure being discussed, Schwitzer and colleagues wrote.
Another red flag is the failure to distinguish between a correlation and cause. One observational study found that people who developed certain health conditions were more likely to be coffee drinkers, but did not conclude that coffee caused disease. That didn’t stop health bulletins from blaring “coffee can kill you,” the authors wrote.
Similarly, patients should be wary of health information that does not explain risks and benefits in absolute terms, Domb said. An alarmist report may state that a new birth-control pill is associated with a 50-per-cent increased risk for blood clots, but if the absolute risk is an increase from 1 in 10,000 patients to 1.5 in 10,000, the true risk is “infinitesimally small,” Domb said. “It’s very easy to misconstrue statistics.”
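Domb’s point about relative versus absolute risk can be checked with simple arithmetic. A minimal sketch, using the article’s illustrative blood-clot figures (1 in 10,000 versus 1.5 in 10,000; the variable names are mine, not from any study):

```python
# The same hypothetical blood-clot risk, stated two ways.
baseline_risk = 1.0 / 10_000   # old pill: 1 case per 10,000 patients
new_risk = 1.5 / 10_000        # new pill: 1.5 cases per 10,000 patients

# Relative increase: the alarming headline number.
relative_increase = (new_risk - baseline_risk) / baseline_risk
print(f"Relative risk increase: {relative_increase:.0%}")

# Absolute increase: what the change means for an individual patient.
absolute_increase = new_risk - baseline_risk
print(f"Extra cases per 10,000 patients: {absolute_increase * 10_000:.1f}")
```

The same data yield a headline-ready “50-per-cent increase” and a half-a-case-per-10,000 change, which is why reports that quote only relative risk can mislead.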