
We know humans are warming the planet thanks to attribution science, a decades-old field – but explaining the caveats of its newest branch, event attribution, to journalists, politicians and the public is never as simple as it seems. So some scientists are taking shortcuts to defend the larger truth: That the crisis is here, right now

An Austrian glaciologist explores a natural cavity of the Jamtalferner glacier in the eastern Alps this past October. Exploring the caves, which have appeared in glaciers and accelerated their melting process, is part of a worldwide effort to understand the effects of climate change. Lisi Niesner/Reuters

Viviane Fairbank is a writer based in Montreal. She is a former editor and head of research at The Walrus and is currently creating a guide for fact-checking journalism in the post-truth era.

In early May, 2017, torrential rain fell on the Ottawa River Basin, a watershed that stretches from eastern Ontario to Montreal. It poured for days, the water combining with snow and frozen soil, and by the end of that week, the watershed had reached its highest level in more than 50 years. All around the region, the river overflowed, seeping into people’s homes and causing more than $220-million worth of insured damage; several cities in Quebec declared a state of emergency, issuing mandatory evacuations. Near the south shore of the St. Lawrence, a 37-year-old man and his daughter were swept away by a current and drowned.

Within a week, the water retreated, and life slowly returned to normal. But a feeling of dread lingered: Residents had never experienced this degree of flooding before. “We can’t say that the [specific] flooding events we’re seeing is a result of climate change,” an environmental writer told the Montreal Gazette at the time, “but we can say that because of climate change, these types of episodes and catastrophes are becoming more and more frequent.” In short, scientists suspected the climate crisis was implicated, but they couldn’t say exactly how.

People look out at the swollen Ottawa River in May, 2017, as heavy rain flooded the watershed. Justin Tang/The Canadian Press

That same week, Laxmi Sushama, a civil engineer and specialist in sustainability at McGill University, was hosting a workshop focused on improving computer models of the Canadian climate. Her research team had spent the past few years refining the resolution and accuracy of regional climate models, such as the ones used by Environment and Climate Change Canada to understand weather patterns. Some of the team’s members were dealing with the effects of the inundation in their own neighbourhoods, and so they decided to try to use their models to answer the question everyone had been asking: Had climate change caused the devastating flood?

For 40 days straight that summer, on one of the largest supercomputers in Canada, Dr. Sushama and her colleagues ran nearly 1,000 simulations of the climate over the Ottawa River Basin. The resulting study, which was published the following year, pinned down what the environmental writer in the Gazette had suggested: Anthropogenic (human-caused) global warming had made the flood two to three times more likely to occur.

It was one of the first examples in Canada of event attribution (more formally known as extreme-event attribution), a recent innovation in the broader field of climate-attribution science, which is the study of the causes and effects of climate change.

Back in the 1970s and ’80s, scientists wanted to determine whether the planet really was heating abnormally, and whether this was primarily because of human activity. To do this, they ran historical data about metrics such as temperature and carbon dioxide levels through computer models that simulated the earth’s climate over long periods of time. This helped them understand how the climate might change when certain variables (such as burning fossil fuels) were introduced. If a model was run many times, with and without those variables, and a change (such as rising temperatures) appeared only in the runs that included greenhouse gas emissions, then it was reasonable to attribute that change to human activity.
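
To see that logic in miniature, consider the following sketch, written here in Python with a simple random walk standing in for a climate model: it runs hundreds of simulations with and without an assumed human-caused warming trend and compares the outcomes. Every number in it is an invented assumption for illustration, not a value from any real study.

```python
# A toy illustration (not a real climate model): compare many simulated
# runs of a random-walk "climate" with and without an assumed
# anthropogenic warming trend. All numbers are invented for illustration.
import random

def simulate_run(years=50, with_emissions=True, seed=None):
    rng = random.Random(seed)
    trend = 0.02 if with_emissions else 0.0  # assumed forced warming, deg C/year
    anomaly, series = 0.0, []
    for _ in range(years):
        anomaly += trend + rng.gauss(0, 0.1)  # forced trend plus natural noise
        series.append(anomaly)
    return series

natural = [simulate_run(with_emissions=False, seed=i) for i in range(500)]
forced = [simulate_run(with_emissions=True, seed=1000 + i) for i in range(500)]

def mean_final(runs):
    return sum(r[-1] for r in runs) / len(runs)

print(f"Mean 50-year warming without emissions: {mean_final(natural):+.2f} C")
print(f"Mean 50-year warming with emissions:    {mean_final(forced):+.2f} C")
# If warming of the observed size appears only in the ensemble that
# includes emissions, attributing it to human activity is reasonable.
```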

Attribution science, in short, is how we now know that climate change is happening, and that people, and not just natural phenomena, are causing it.

Traditional attribution science is concerned with the climate, which is a broad statistical description of conditions on the planet, averaged over 30-year periods. But in terms of day-to-day life, that kind of research is highly abstract: we don’t experience the climate, per se – we experience weather, which is the state of the atmosphere at a given place and time. Unlike the climate, weather changes from day to day (even minute to minute). New event-attribution research, then, tries to understand the impact of human activity on those day-to-day happenings.

The message of event-attribution research is simple and concrete: Climate change has altered the weather you experienced today. Event attribution “makes climate change seem more real to people,” says Nathan Gillett, a researcher for the Canadian Centre for Climate Modelling and Analysis (CCCma) in Victoria, B.C. “It helps to motivate people to do something about it.”

Thick smoke obscures the sun in Monte Lake, B.C., this past August after the White Rock Lake wildfire swept through. Darryl Dyck/The Canadian Press

Thanks to some of the latest event-attribution studies, we know that anthropogenic emissions are a cause not just of general warming in Canada, but of the heightened risk of fires and heatwaves in Western Canada, the increased frequency of flooding near Canadian coasts, and the reduction of small ice caps and shelves in the Canadian Arctic, most of which will be completely gone by 2100.

In recent years, climate scientists tell me, they have experienced a surge in demand from media and policy makers for event-attribution results. Indeed, as a journalist whose job includes communicating scientific facts to the public, I’ve found event-attribution research exceptionally helpful in understanding and communicating the consequences of climate change. Environmental journalists have long felt frustrated that their colleagues’ news coverage doesn’t “connect the dots” between weather and climate. Last June, for example, one American journalist analyzed 149 local news stories about a record-breaking heatwave in Colorado and found that only 4 per cent had made “even the slightest reference to climate change.”

This finding, he tweeted, represented “just a total failure [on the part of journalists] to perform our basic function to tell people what’s happening, what’s important and why.” I’m inclined to agree: There’s something intuitively wrong about a journalist reporting on recent dangerous weather events (such as deadly hurricanes, floods, and wildfires) without including a reminder to readers that avoiding such problems (and worse) in the future will require some kind of admission of guilt about climate change.

Event-attribution research gives journalists the tools they need to rigorously connect the necessary dots: With an event-attribution study in hand, a journalist can report on a weather event, then tell readers how climate change is implicated in its happening (which it nearly always is).

Over time, however, I’ve come to realize that the scientific story about event-attribution research is not as simple as we journalists tend to make it out to be. It turns out that dramatic event-attribution findings often come with significant caveats; a calamity such as a deadly heatwave can never be directly and exclusively blamed on global warming.

Like attribution science in general, event-attribution science relies on statistical climate models. But a specific weather event is a much more difficult thing to pin down statistically than a broader climate pattern. Owing to their difference in scale, it is, practically speaking, impossible to connect a single weather event to a decades-long change in climate; all climate change can be blamed for is having increased or decreased the likelihood of the climate conditions that make such weather possible.
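
What a study like Dr. Sushama’s actually computes is a ratio of probabilities drawn from two ensembles of simulations: the climate as it is, and a counterfactual climate without human emissions. Here is a minimal sketch of that arithmetic, using invented counts rather than the Ottawa study’s actual results:

```python
# Made-up counts for illustration: how often a flood at least as severe
# as the observed one appears in 1,000 "factual" simulations (the climate
# as it is) versus 1,000 "counterfactual" ones (no human emissions).
floods_factual = 30         # hypothetical count
floods_counterfactual = 12  # hypothetical count
runs = 1000

p_factual = floods_factual / runs                # 0.030
p_counterfactual = floods_counterfactual / runs  # 0.012

probability_ratio = p_factual / p_counterfactual
print(f"Probability ratio: {probability_ratio:.1f}")  # -> 2.5
# A ratio of 2.5 is reported as "climate change made this flood about
# 2.5 times more likely" -- never as "climate change caused this flood."
```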

A protester at the U.S. Chamber of Commerce in Washington this past October. Jacquelyn Martin/The Associated Press

To understand the nuance of this distinction, I find it helpful to think of the stock market: Climate-attribution science is like looking at long-term trends in the stock market, whereas event-attribution research is like studying a temporary financial crash. Long-term analysis can help us gain a more accurate perspective on robust trends for the future of the financial market, but it certainly can’t explain short-term blips and anomalies along the way. Saying that a single flood was caused by climate change is no more reasonable than saying that the sudden crash in bitcoin over a couple of hours in April was caused by its steady growth in price over the past decade. From a strictly theoretical scientific perspective, then, event-attribution results are not all that useful – or even intelligible.

Of course, in the years since attribution science was first developed, climate models have become vastly more intricate, incorporating clouds, aerosols, oceans, swamps, volcanic eruptions, vegetation and permafrost. The models are run on some of the most powerful supercomputers in the world; a typical program contains more than one million lines of code, enough to fill 18,000 pages of printed text. Even with such a rich base of information, however, there’s only so far science can take us: Any modelling research has an unavoidable degree of imprecision, and the more fine-grained our analysis of a broad region becomes, the less certain we can be of our exact findings.

I see event-attribution science, then, as questioning the boundary in climate science between theoretical research (focused on the improvement of climate models and statistical analytical methods) and practical communication (focused on the dissemination of results that will resonate with the general public). Attribution scientists still don’t all agree on how the nuances and limitations of their work should be communicated to the public – or whether such work, accompanied by all the necessary caveats, really can serve the purpose it’s meant to. For Francis Zwiers, the director of the Pacific Climate Impacts Consortium in Victoria, the inherent limitations of event-attribution research make some recent studies “not quite scientific anymore” – and even, depending on how their results are presented to the public, “irresponsible.” For Dr. Sushama, on the other hand, event-attribution work is crucially important, precisely because of its public-facing motivations. Climate change, after all, is a matter of life or death. “I didn’t need to study the Montreal flood: I wanted to do it to show people what is happening,” she says.

Dr. Zwiers and Dr. Sushama represent two different, and I think equally reasonable, responses to the unusual set of challenges climate scientists face. Their findings, and how we choose to react to them, are helping determine the future of the planet. In order to get people to properly appreciate the severity of what climate scientists are absolutely sure of – that humans are causing the climate to change, with disastrous consequences – some are increasingly motivated to conduct research that is, while less scientifically rigorous, more accessible to the general public.

This is made more problematic by the fact that the general public is largely out of touch with scientific practice, and thus misunderstands the level of certainty one can reasonably expect from scientific research. (The gap between public perceptions of science and actual practice was highlighted, most recently, during the COVID-19 pandemic, when ambiguous scientific results were published, criticized and revised in the public eye.) Many people interpret ambiguity and uncertainty, which are natural features of scientific research, as signs of scientific failure. Denialists have noticed this same gap and exploited it to generate doubt among the public about climate science’s probabilistic results.

Thus, paradoxically, some climate researchers find themselves pursuing work that is arguably dubious in order to defend absolutely straightforward scientific truths: In short, that climate change is happening, here and now.


Technicians do maintenance on a computer at the Euro-Mediterranean Centre on Climate Change in Lecce, Italy, where sophisticated software tries to calculate what a warming world's future looks like. Janos Chiala/Getty Images


Climate change research found its footing in the mid-20th century, at a time of great advances in the scientific community in North America. It coincided with the rise of computerization, for one thing, which allowed scientists to study large-scale patterns (such as the motions of the atmosphere and oceans) while processing more data and performing more complicated calculations than could ever be done by hand. It was also a period during which the scientific community experienced tremendous growth in size and status. After the Second World War, the American government increasingly relied on expert scientists to develop technologies and provide advice regarding the country’s Cold War manoeuvrings; because of the resulting boost in funding, scientific efforts became much larger, more organized, and focused on building consensus.

During this period, many scientists took strong positions on matters of public policy, especially in areas that drew on scientific research. Discerning Experts, a 2019 book co-authored by several historians of science, describes how, while politicians debated whether to build a hydrogen bomb in the postwar period, leading scientists, including the physicist Robert Oppenheimer, who had previously led research into atomic weapons, publicly opposed such work. After the decision to build the H-bomb was made, Oppenheimer “was humiliated and stripped of his security clearance,” the authors write. “Many scientists were chastened by this episode and saw that reticence on policy questions was a safer strategy than candor.”

This understanding of science as “neutral” has guided the work of environmental researchers ever since – though not every scientist is necessarily aware of this history, nor willing to discuss its ramifications. To suggest to a scientist that their work is political or value-laden is to set off alarm bells. (When I pointed out to Xuebin Zhang, a researcher with Environment Canada, that event-attribution work might appear more “politically motivated” than other attribution work, he agreed, but he also immediately specified that he preferred the word educational to political.)

Climate research is in this way caught in the midst of an existential battle about what science is, how it should be conducted, and what kind of truth it produces – a battle that has been exploited by certain parties to their advantage. As institutional science established itself in the mid-20th century, people and companies with vested interests in undermining science were also refining their tactics – first by successfully attacking research on tobacco’s links to cancer, then doing the same for pollution and acid rain.

An employee looks at no-smoking signs in a Vienna printing shop. Leonhard Foeger/Reuters

Naomi Oreskes, a historian of science at Harvard University, has spent much of her career documenting the tactics of institutions and individuals who stand opposed to the kind of government intervention that most environmental causes require. To stave off outside intervention in the fossil-fuel industry, Dr. Oreskes wrote in a 2004 paper, skeptics create an “impression of confusion, disagreement, or discord among climate scientists” where there really is general agreement.

In engineering, high finance, insurance, and most other specialized or technical fields, the facts of climate change are well accepted because they need to be: Over the past decade, for example, insurance premiums and building regulations have changed to account for higher environmental risks. But within media and politics, broader ideologies continue to affect how likely someone is to accept and disseminate scientific facts. In 2019, an Ipsos poll found that 30 per cent of Canadians will believe only those scientific findings that already line up with their beliefs. The same poll found that nearly half of Canadians saw scientists as “elitists,” and 32 per cent were “skeptical” of scientific results in general – up from 25 per cent in 2018.

Today, one-quarter of all tweets about climate change are produced by bots, most of which are spreading doubt about the validity of scientific findings. And climate skeptics crowd out professional scientists in the media: A study conducted by researchers at the University of California, Merced, in August, 2019, found that climate “contrarians” were featured in 49 per cent more media articles than scientists. (This is mostly because of the prevalence of right-wing online blogs. But even in mainstream media, contrarians continue to be quoted slightly more than scientists.)

These efforts have had substantial effects on the work of climate scientists. A paper published in the European Journal for Philosophy of Science in 2015 outlined the situation: Not only do skeptics distract scientists from productive work by forcing them to respond to “a seemingly endless wave of unnecessary and unhelpful objections and demands,” but they also create an atmosphere in which scientists are scared to mention any kind of deliberation or uncertainty in their work. Fearful of being called “alarmist,” climate scientists have become overly conservative about their views in public.

Supporters of Alberta's oil and gas industry gather at a pro-pipeline rally in Calgary in 2018. Jeff McIntosh/The Canadian Press

In 2012, Dr. Oreskes and three colleagues found that researchers tended to “err on the side of least drama,” choosing to publish findings that are least likely to provoke outrage or pushback – they have been forced into a position of artificial conservatism in communicating their research to the public. The same line of thinking has led to what Dr. Oreskes calls “least common denominator knowledge” in scientific reports: Official assessments typically publish the lowest of statistical likelihoods for any kind of prediction. So, if nearly every climate study finds that the risk of a calamity is at least 10 per cent, but some scientists think it could be as high as 70 per cent, the published probability – the number from which policy is developed – is 10 per cent. To avoid any vulnerability or controversy, anything higher, and any acknowledgement of the disagreement, stays behind the closed doors of the scientific community.

To this extent, climate skeptics have succeeded, not simply in sowing doubt about the existence of anthropogenic climate change, but in actually hampering public understanding of scientific research and its results. And for many scientists and environmentalists, I think, event attribution is the perfect tool with which to push back – not by defending the inevitable ambiguity of scientific results, but rather by presenting the scientific facts with as much confidence and certainty as is rhetorically possible.

No serious scientist actually thinks it would make sense to say that climate causes weather; published, peer-reviewed scientific papers include all the necessary caveats to rectify such a simplistic statement. But journalists and policy makers who read scientific studies tend to extract only the headlines while shrugging off technical details, and many scientists let them do so because they know that even the (inaccurate) simple statements written by journalists reflect a broad and important scientific truth about the reality of climate change.

The public wants concrete answers about the climate, not statistical hand-wringing. And for some scientists, convincing the public of the basic truths about climate change is the first priority. In fact, as one researcher recently put it, the climate crisis is so dire, and its perceptible effects on the weather so strong, that attribution scientists should shift toward treating climate change as the “null hypothesis.” Though we may never be able to definitively prove it, we can assume that climate change will have played a part in heightening the likelihood or severity of every extreme weather event in our future, and go on from there.


A plankton researcher looks at a sample of fish larvae from the North West Atlantic at the Marine Biological Association in Plymouth, England, this past August. Daniel Leal-Olivas/AFP via Getty Images


For most of us who aren’t working scientists, the mental model we have of scientific knowledge is based on the “scientific method.” As children in school, we were told that this was what defined the natural sciences: Regardless of whether the research was in organic chemistry, particle physics, or oceanography, every scientist was described as following the same process of inquiry. Start with a hypothesis, and then test it through experiment. A bit of mixing solutions in beakers and vials; a lot of memorizing the periodic table.

In chemistry class, we were taught about basic reactions, and then we conducted the demonstrations ourselves, expecting to reach the same results as our textbooks described – mix the same chemicals, in the same proportions, at the same temperatures, and you’ll get the same unambiguous outcomes. Scientific knowledge, we learned, was a collection of clear, interconnected, agreed-upon facts about the natural world, discoverable through direct experimentation and reproducible by anyone with sufficient interest and the relevant equipment.

A great deal of the time, however, actual scientific research bears little resemblance to this simplified picture. Scientists’ methodologies vary widely based on the field of research; typically, researchers investigate phenomena they don’t completely understand, their investigations require creating much simpler versions of real-world complexities, and they adopt theories that aren’t entirely accurate but are nevertheless explanatorily useful.

This phenomenon is not specific to climate change research: Almost all science is like this. A biomedical scientist researching cancer, for example, might isolate cancer cells in an incubator, allow them to interact with other cells in a controlled environment, then coat them with special dyes in order to visualize the resulting changes in their composition. This kind of experiment can point to the behavioural patterns of cancer cells, but it can’t tell researchers precisely why the cells behave the way they do, nor how they will act in a more complex environment such as the human body.

Our mental model of science is, in this basic sense, wrong: Research does not always provide absolute certainty or causal explanations, particularly not in newer domains of research. Crucially, that doesn’t mean the science is faulty. Rather, those limits are a function of the sheer complexity of the real world, which cannot be captured in its entirety by a single experiment or theory, no matter how refined. Generally – though not always – this goes unnoticed by the general public: People continue listening to advice from their doctors without asking many detailed methodological questions about how the drugs they are prescribed were developed. When it comes to climate change, however, because there are so many policy and financial interests at stake (because, like COVID-19, it concerns our general livelihoods and not just individual lives), the conversations often play out very differently.

Members of the organization Doctors For Future examine a model of the Earth at a 2019 demonstration in Munster, Germany, for the German Congress of Physicians. Guido Kirchner/AFP/Getty Images

After centuries of research, we know as much about the global climate system as we do about the human body. This offers a helpful framework for comparison: Doctors cannot make us live forever, and some health concerns, like malignant cancer cells, remain somewhat mysterious, but few would question the relative success of contemporary medicine, which can diagnose and cure a huge variety of diseases. Similarly, although the global climate is a chaotic, unruly and capricious system, ranging from microscopic chemical reactions to thermodynamic processes that span the entire globe, scientists know enough to provide a generally accurate and useful picture of the whole thing.

Scientists use a variety of intricate tools and methods to gather data from every part of the climate system, which encompasses the air, water, ice, earth, rock and every living creature on Earth. Environmental datasets are the result of work and co-ordination between countless scientists who measure temperature, wind, pressure and a litany of other variables, using local land-based stations, ships, buoys, airplanes and satellites. Researchers drill into prehistoric ice, slice open ancient tree trunks, and dig into ocean sediments to estimate the atmosphere’s composition from hundreds to thousands of years ago. Twice a day, hundreds of balloons around the world lift radio instruments into the atmosphere to measure pressure, temperature and humidity. And several thousand autonomous robots drift across the oceans, measuring salinity and temperature.

But there are only a limited number of land stations and satellites in existence, and they don’t all provide the same quality or resolution of data. It is impossible to observe everything everywhere all the time. Every measurement gathered introduces the possibility of error, as does every generalization and estimate – which is why scientists agree that, like any other empirical pursuit, climate research is full of small mistakes. “To insist on absolutely accurate data,” wrote Wendy Hui Kyong Chun, a professor of communication at Simon Fraser University, in a 2015 paper about climate models for Critical Inquiry, would be to ensure that “no measurements and calculations are ever made.”

Satellite image from the U.S. National Oceanic and Atmospheric Administration tracks Tropical Storm Pamela as it approaches Mexico on Oct. 11. NOAA/NESDIS/STAR GOES via AP

Some of Earth’s physical processes are also impossible to simulate on a computer, and others, such as the interaction of particles in the atmosphere, are too volatile and occur at too small a scale to be effectively represented in a global model. In theory, this problem could be solved by simply adding more computing power. But scientists don’t generally have the necessary data and can’t afford the tremendous cost of building and running every model in maximal detail. As one researcher put it in an article for Carbon Brief, a Britain-based climate literacy website, scientists are always finding “a reasonable compromise” between accuracy and computer processing time. Achieving the greatest possible precision in all climate modelling, in other words, would take so much time and computing power as to be impossible in practice.

The thousands of measurements gathered by scientists are combined and fed into climate models, computer simulations of the global climate system. Models are built from algorithms that represent chemical and physical processes (for example, laws dictating the flow of energy), allowing scientists to simulate changes in the earth over time. In work like this, which is based on statistical models, there will always be an element of ambiguity: Probabilities cannot tell us exactly what happened or why, but rather what likely happened or could have happened. Scientists can test the soundness of a model by feeding it historical data and asking it to predict present-day conditions: If the model’s results line up with current observations, it’s likely to be reliable. But it’s impossible to bullet-proof any climate model: Climate research is, by definition, about predicting things that have never happened before.
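
Here is that hindcast test in miniature: a sketch with a stand-in “model” and round-number data, none of it drawn from a real modelling centre. The CO2 levels are approximate historical values; the observed anomalies and the model’s sensitivity parameter are illustrative assumptions.

```python
# A toy hindcast (values are illustrative): drive a stand-in "model" with
# historical CO2 levels and compare its output with observed warming.
def toy_model(co2_ppm):
    sensitivity = 0.008  # assumed deg C of warming per ppm above 280 ppm
    return sensitivity * (co2_ppm - 280.0)

# year: (approximate CO2 in ppm, assumed observed anomaly in deg C)
observations = {1980: (339, 0.27), 2000: (370, 0.40), 2020: (414, 1.02)}

for year, (co2, observed) in observations.items():
    predicted = toy_model(co2)
    print(f"{year}: predicted {predicted:+.2f} C, observed {observed:+.2f} C,"
          f" error {predicted - observed:+.2f} C")
# Consistently small errors build confidence in projecting forward;
# large systematic errors send modellers back to refine the physics.
```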

What the statistical models that form the basis of climate attribution allow us to do – and this is no small feat – is to play with variables to see how the likelihood of events changes based on certain conditions, and then make reasonable inferences from that information. Of course, we could wait two decades to see whether today’s predictions turn out to be correct – but at that point, if they were correct, it would be too late to reverse course: The world’s temperature would already have risen by more than 1.5 C compared with preindustrial levels, with catastrophic results.

In the end, scientists simply need to ensure their models’ precision as best they can and be transparent about limitations, then hope that the world trusts in their results. Scientists have been able to confirm that early models’ predictions about climate change in the 1970s and ’80s have so far been largely correct. They’ve also adjusted modelling that turned out to be incorrect: Arctic sea ice, for example, has disintegrated much faster than predicted.


Elements gathered from glacier samples are charted at the Byrd Polar and Climate Research Center in Columbus, Ohio, this past January. The centre gathers ice cores from around the world to complete the picture of what the Earth's historical climate looked like. Megan Jelinger/Reuters


The uncertainty of scientific research – and of climate research in particular – becomes a problem only when it is misunderstood. For people who think that science is about compiling lists of certain facts, climate attribution work, with its language of probability, likelihoods, and confidence levels, is somewhere between confusing and suspicious. Skeptics exploit the natural uncertainty of statistical results to call the entire enterprise of climate science into question. If we knew that climate change was real, they seem to be implying, then there would be no need to discuss probabilities at all.

It would make sense for scientists and journalists to work together to counteract such impressions. But I’ve found such an enterprise to be deceptively challenging: Journalists are usually unable (owing to time or budgetary constraints) or unwilling (owing to considerations of “readability,” simplicity, or the like) to communicate the nuances of scientific research that would be required to prevent scientific skeptics and denialists from taking over.

Here’s an example that illustrates the challenges that come with earnestly communicating scientific results, caveats and all. Climate models are produced as part of the Coupled Model Intercomparison Project (CMIP), an initiative that combines results from scientists around the world. Models are packaged in “generations,” which form the basis for research published by the United Nations’ Intergovernmental Panel on Climate Change (IPCC) over time. CMIP6, the newest generation, incorporates about 100 new climate models from 49 different modelling groups, including Canada’s. But last year, CMIP6 production stalled, in large part because of conflicting results from several modelling centres.

For the first time, a number of the climate models used by research groups incorporated cloud formation by aerosols, a process that happens at a very small scale and is quite volatile. In theory, this addition should make the models’ climate predictions more accurate than before. But many predictions created by CMIP6 models were initially running exceptionally hot, projecting global warming of at least 5 C where previous models predicted between 2 C and 4.5 C – which meant either the world was warming much faster than scientists thought, or the climate models had veered off course.

Nathan Gillett, the CCCma researcher, has been working to understand why many CMIP6 models, including his own, are still overheating, and how the problem might be solved. Dr. Gillett and his team have found some interesting tentative answers so far, but he worries about the “best way to communicate” the situation to the public. Some climate scientists are addressing the problem by adjusting poorly constrained parameters in their CMIP6 models until the output better matches observations, a process known as “model tuning.” But tuning is controversial: It can significantly affect the results and behaviour of a model, sometimes more than might be justified. Even within the academic community, tuning practices are not agreed upon – and they are certainly not advertised to the public, in part because they serve as the perfect bait for climate denialists. As a guest (inaccurately) told Fox News in 2018, when a climate model is tuned, “it’s the scientist, not the science, that’s determining how much it’s going to warm.”
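
For readers curious about what tuning amounts to, here is a deliberately cartoonish sketch: a single uncertain parameter is dialled until a toy model’s warming matches an observed benchmark. Real tuning involves many parameters and far subtler judgment calls; the parameter, the “physics” and every number below are invented for illustration.

```python
# A toy picture of tuning (everything here is invented): pick the value of
# an uncertain aerosol-cloud parameter that brings simulated warming
# closest to an observed benchmark.
def simulated_warming(aerosol_strength):
    # Assumed toy physics: stronger aerosol-cloud brightening means more
    # reflected sunlight, hence less net warming.
    return 1.6 - 0.5 * aerosol_strength  # deg C

observed_warming = 1.1  # deg C, the benchmark to match (illustrative)

candidates = [i / 100 for i in range(0, 201)]  # plausible range: 0.0 to 2.0
error, tuned = min((abs(simulated_warming(s) - observed_warming), s)
                   for s in candidates)
print(f"Tuned parameter: {tuned:.2f} (remaining error {error:.3f} C)")
# Within its plausible range the parameter can swing the model's warming
# by a full degree -- which is exactly why tuning choices are contested.
```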

Until the overheating problem and its solutions are better understood, the debate about CMIP6 is staying largely internal; it would be difficult to explain to skeptics that raw results are showing more warming than may actually be the case. Last year, some journalists were already taking the models’ results at face value, reporting that the world is warming faster than previously expected; others were reporting that CMIP6 models are simply wrong. For some online commenters, this only served as more evidence that climate science should never be trusted.


A marine biologist snorkels at Albania's Kallmi beach to see how invasive algae is affecting the Adriatic Sea's ecosystems. Invasive species are becoming a more pervasive problem as changing temperatures allow plants and animals to move into areas that were once uncomfortably cool for them. Gent Shkullaku/AFP via Getty Images


The CMIP6 situation is precisely what climate denialists are likely to pounce on: Skeptics seize on experts’ language of uncertainty and “model tuning” to argue that climate scientists are tampering with evidence, creating the illusion of global warming where there is in fact none. But science has never really been about absolute certainty; its purpose is to build on observations to arrive at generalizations and predictions. Contemporary philosophers of science have incorporated this realization into a new definition of scientific knowledge: information that is socially constructed and agreed upon by a diversity of experts.

Bruno Latour, a French philosopher and anthropologist, now professor emeritus at Sciences Po Paris, who is well-known for his research on scientific truth, has described the results of environmental research as being achieved “not by some earth-shaking, fool-proof demonstration but by the weaving together of thousands of tiny facts, reworked through modeling into a tissue of proofs that draw their robustness from the multiplicity of data, each piece of which remains obviously fragile.” Individual facts could turn out to be mistaken, but the shape of the whole would remain intact.

The work of philosophers such as Dr. Latour also has implications for expectations of “neutrality” in science. The orthodox interpretation is that scientists should be guided only by scientific concerns, not social, political or personal-ethical ones: Scientists provide the facts, and others interpret them and make decisions based on them. But Dr. Latour’s work exposes the fact that scientists’ personal values and inclinations inevitably inform their choices about what to study and how to study it, and they are just as vulnerable to error as everyone else.

Dr. Zhang gave voice to a similar idea when I asked him about the ideal of “value-free” science. “Scientists are neutral in the sense that we do science, but we cannot avoid thinking about how our work is perceived and interpreted by the public,” he says. Which scientific questions are asked and how their answers are communicated can make a huge difference to public perception.

Dr. Zhang points out that this is true for any scientific field. Imagine, for instance, a physician who needs to break some news to a patient. “I can present the situation as a 90-per-cent chance of survival, and the cancer patient will likely be very happy. But if I say, ‘I’m very sorry to tell you that there’s a 10-per-cent probability you will be dead in 10 years,’ the patient will probably feel quite differently.”

Medical training can teach the doctor how to make the risk estimate, but it can’t necessarily equip them to decide which presentation of that estimate will be most useful to their patient. Someone who is very pessimistic about their prognosis may benefit from hearing the 90-per-cent formulation (depression can take a toll on health outcomes); someone who is cavalier about their health may need the shock of the 10-per-cent figure to remember to take their medication as prescribed. In each case, the doctor needs to draw on their beliefs and their understanding of the person before them to decide how to proceed.

A doctor and nurse in Bucharest adjust the CPAP mask of a COVID-19 patient this past October. Inquam Photos/Octav Ganea via Reuters

Scientific research is, by necessity, full of value-laden judgements – not only at the level of communication, but also when scientists choose their topics of investigation, decide how to assess their hypotheses, and determine their level of confidence in their results. Some people interpret this state of affairs as a reason not to trust science at all. But Dr. Latour argues his views should support science, not disqualify it; we simply need to update our crude understanding of what scientific knowledge looks like.

Dr. Latour is not alone in emphasizing the importance and usefulness of acknowledging the values that subtly guide scientific practice. Feminist philosophers of science have long worked to show the biases inherent in certain scientific research programs. (Scientists conducting “objective” research on hormonal differences, for example, produced results showing – inaccurately – that there is a biological basis for women’s lesser mathematical abilities.) It’s when we ignore the inevitable ethical and political dimensions of scientific work that we get into trouble, these philosophers say; once we acknowledge them explicitly, scientists can develop a transparent and better-informed practice.

Following this line of thinking, feminist philosophers of science have formulated the idea of “strong objectivity” (as contrasted with the “weak objectivity” of supposedly value-free work), which is focused on finding consensus among a diverse group of experts who all acknowledge their subjective motivations. If a large and broad assortment of researchers who come at a scientific question from different perspectives all wind up reaching the same conclusion, in other words, that’s a very robust demonstration of its soundness.

Climate science, and the modelling used in attribution science particularly, is an instance of this kind of strong objectivity. Each model used to analyze the climate or a particular extreme event has a complex history, having been built on and improved by hundreds of scientists over the course of years or decades; many share pieces of code that were borrowed and adapted from earlier prototypes. In this sense, every climate model in existence is the product of a massive collaboration – the archetype of consensus-based research. So in the end, the more we know about climate models, including their flawed and complicated histories, the more we should come to trust their results.


Back at the Jamtalferner glacier in Austria, the glaciologists take a closer look inside the ice. Scientists say the glaciers of the eastern Alps are already so far gone they will likely vanish completely in the next few decades, but learning about their demise could show what will happen to the rest of the world's glaciers if not enough is done about climate change. Lisi Niesner/Reuters


It’s clear from the event-attribution controversy, I think, that climate scientists’ goals as investigators and their goals as communicators do not always align – and scientists don’t always agree on which should take priority. At the heart of this tension is the divide between those who believe scientists should have a say in political, social and ethical decision-making regarding climate change, and those who think scientists have no business participating in such discussions. When two scientists organized the People’s Climate March in April, 2017, inspiring millions of people around the world to promote political action on climate change, other scientists criticized them for abandoning the necessary “neutrality” of their practice. As one dissenting professor of coastal geology wrote in an op-ed at the time, “a march by scientists, while well-intentioned, will serve only to trivialize and politicize the science we care so much about.”

Many others, however, like Dr. Latour, see this presumed divide between climate science and climate activism as a hindrance. “As members of society, we are also responsible,” Dr. Sushama, the McGill researcher, says. In her mind, a climate scientist’s job is not only to provide facts, but also to ensure people understand and respond to them properly, particularly when such responses could dramatically change the future of the planet. Although politically informed research used to be unthinkable for scientists who insisted on the value-neutrality of their and their colleagues’ work, the urgency of the climate crisis has shifted that assumption. And increasingly, we’re able to see how public discourse has influenced the kinds of questions researchers ask and how they go about communicating their results. (A growing number of scientists, for example, are using words such as disaster, crisis or emergency to describe climate change in their published work.)

Developing effective, science-informed responses to the climate crisis, then, might require a cultural acceptance of the blurred line between scientific research and communication, as exemplified by event-attribution work. It will also require us to develop a better relationship with the uncertainty that undergirds all scientific work. As the authors of Discerning Experts explained, when difficult policy decisions need to be made, “uncertainty is often used as a reason not to act.” Yet when it comes to climate change, researchers may never be able to provide us with the clarity we’ve come to mistakenly expect from scientific endeavours; it may be that the best information we’ll ever have about the future will be in the form of probabilities. Climate models can’t tell us why a certain event took place any more than a doctor can tell us with confidence why our head hurts, and they can only predict the future in terms of probabilities, just as a doctor can list off survival rates for various illnesses and courses of treatment. Yet when we’re sitting in the hospital emergency room experiencing serious symptoms, few people would suggest ignoring the doctor’s recommendations.

“The COVID pandemic was bad in many ways,” Dr. Sushama says, “but it also helped to create awareness” of the inherent uncertainties in scientific research. In following news of the pandemic, non-scientists became more exposed to the usefulness of computer models (which were employed early on to demonstrate the value of “flattening the curve” of infections) and the need to take action based on probabilities. In media releases, public-health experts cite a concept called the precautionary principle, which states that if the consequences of some events would be devastating, then we should act to prevent them, even if their likelihood is unknown.
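
The precautionary principle can be expressed as a simple expected-cost comparison. The sketch below uses invented numbers to show why, when the potential loss is large enough, the case for acting holds across a wide range of unknown probabilities:

```python
# A toy expected-cost comparison (all numbers invented) illustrating the
# precautionary logic: when the potential loss is large enough, acting
# beats waiting across the whole plausible range of probabilities.
cost_of_acting = 10          # e.g., prevention spending, arbitrary units
cost_of_catastrophe = 1000   # loss if we do nothing and disaster strikes

for p in (0.02, 0.05, 0.10, 0.30):  # unknown probability of catastrophe
    expected_cost_of_waiting = p * cost_of_catastrophe
    choice = "act" if cost_of_acting < expected_cost_of_waiting else "wait"
    print(f"p = {p:.2f}: waiting costs {expected_cost_of_waiting:6.1f}"
          f" on average vs {cost_of_acting} to act -> {choice}")
# For every probability above 1 per cent, "act" wins -- the likelihood
# need not be known precisely for the decision to be clear.
```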

This principle could be said to have informed lockdown decisions around the world over the past year. It’s also the line of thinking most scientists seem to follow about the climate crisis. As Dr. Zhang puts it, the pandemic and climate change both require us to make decisions based on risks rather than certainties – but whereas the risks associated with the pandemic are tangible and immediate, those associated with climate change are “long-term, and the consequences are much greater.” COVID reminded us that “we always make decisions based on uncertainty”; acting to mitigate climate change should feel no different.

