Linda Besner’s most recent book is Feel Happier in Nine Seconds.
Near midnight on July 7, 2017, one of the worst disasters in aviation history – in which Air Canada Flight 759, carrying 135 passengers and five crew, missed its runway on the descent into San Francisco International Airport and crashed into four passenger planes awaiting takeoff, killing or injuring more than 1,000 people – came within four metres of happening.
As the Air Canada plane approached, the pilot of one of the aircraft on the ground radioed the flight-control tower. “Where is this guy going?” he asked, of the plane barrelling towards him. The crew of a Philippine Airlines flight, also on the ground, turned on its lights to signal its presence.
The pilot of Flight 759 abruptly changed course, even as air-traffic control came over the radio with the order to abort the landing and climb. The plane swooped low over the craft on the ground before regaining altitude. No one was hurt; no equipment was damaged.
What actually happened was, statistically speaking, nothing. We know about the incident because, since the 1970s, the aviation industry has developed a particular expertise in reporting things that don’t happen.
In most areas of life, what we know suffers from a bias toward the actual. In gathering and assessing information about the world around us, most of us focus narrowly on things that exist and events that have come to pass.
Near misses are the events of shadow worlds that run alongside this one. Exploration of their import has largely been the domain of science fiction, or counterfactual history – the slew of literary and filmic depictions (Fatherland, The Man in the High Castle) that start with the question, “What if the Nazis had won the war?” Or Michael Chabon’s 2007 novel, The Yiddish Policemen’s Union, which asks, “What if a Jewish state had been founded in Alaska?”
But counterfactual history has itself been gaining some acceptance as a legitimate learning tool within university departments, and the past decade has seen rising interest in the study of near misses in diverse fields. In the United States, the aviation industry’s near-miss reporting system has served as a model for NASA, while firefighting and nursing associations have started collecting their own information on near misses.
As the idea of the near miss gains currency, it’s possible to imagine the study of the almost-was expanding into any and all sectors. All of life is a branching off of one option after another, the major and minor often indistinctly marked. Everything is almost something else.
Greater attention to the events that don’t quite occur might unlock their significance for individuals, institutions, companies – even for democracy.
From a mathematical standpoint, things that don’t happen have a significant advantage over things that do: There are far more of them.
A disaster with a long history, which has been much studied, is maternal mortality – death in childbirth. In past centuries, doctors or reformers in Europe and North America who wanted to study the phenomenon had no shortage of cases to learn from, since labour was frequently fatal. But as sanitation and other factors improved, the number of women dying in childbirth declined – in Canada, the most recent numbers show seven deaths for every 100,000 live births. This, of course, is the desired outcome – but it means the number of cases that can be studied in order to keep every woman delivering a baby in Canada alive is vanishingly small. To try to prevent those seven deaths, researchers from associations such as the Maternal Health Study Group of the Canadian Perinatal Surveillance System now recommend casting a net wide enough to capture the maternal near miss – every delivery in which the mother’s life was threatened by something going wrong.
Things are always going wrong. In 2012, the Process Improvement Institute, a safety-management consulting firm based in Tennessee, put out a paper on near misses in the chemical-processing industry. The authors found that for every accident that occurs, there are about a hundred near misses, as well as 10,000 errors and failures – either human mistakes or flaws in the system. A wealth of information is theoretically available.
But gathering data on things that didn’t happen poses practical as well as metaphysical challenges.
Both health-care and factory workers are officially encouraged to report errors or mishaps that put someone in danger. But an error isn’t just useful information – it’s the kind of thing that gets someone fired. Companies are fond of near-miss reports because they are far cheaper than lawsuits: If no one was actually hurt, no one can sue. But for workers, the risk of job loss looms. It can be hard to incentivize people to offer up information about their own mistakes or failures to adhere to protocol. Even if the error is someone else’s, it’s not always comfortable (or safe) to snitch.
Also, people’s sensitivity to near-miss events can be patchy. I spoke with Garry Gray, a sociologist at the University of Victoria, who has been studying near misses for years. In recording safety data in factory settings, he told me, “You start to notice these near misses happening all the time.”
Most safety systems that use near-miss numbers rely on self-reporting – people either filling out a questionnaire or telling an inspector about incidents in which they came close to being injured or killed. Dr. Gray told me that, as with anything to do with how people remember events, these reports are likely to be coloured by the emotions attached to the memories. Because dramatic incidents stick in the mind, the majority of reported incidents are of vehicles almost crashing, or heavy pipes almost cracking someone on the head. People are less likely to remember a more humble near miss: that time they tripped over a loose cable, or fell down but brushed themselves off and carried on working.
In a 2018 article published in The Canadian Journal of Sociology, Dr. Gray remarked that self-reporting in factories also fails to capture the near misses that weren’t noticed at all – sometimes the person never knew they were in danger. A superior method of gathering this information might be to post observers at hotspots, where they could see for themselves the frequency of near-accidents. If a non-event is usually recorded as 0 and an event as 1, near misses could even have their own designation in statistical analysis: 0.9.
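Dr. Gray’s proposed designation can be sketched in a few lines of code. The scheme below is a hypothetical illustration of the idea, not any real safety system’s methodology: under the usual binary coding, a near miss counts as 0 and disappears from the statistics entirely, while giving it an intermediate value of 0.9 lets it register in a site’s risk score. All names and the sample numbers are invented for illustration.

```python
# Hypothetical sketch of coding near misses as 0.9 rather than 0.
# Severity weights: 0 for a routine non-event, 1 for an actual
# accident, and 0.9 for a near miss, per Dr. Gray's suggestion.
SEVERITY = {
    "non_event": 0.0,   # nothing noteworthy happened
    "near_miss": 0.9,   # danger was present, but no harm occurred
    "accident": 1.0,    # actual injury or damage
}

def risk_score(observations):
    """Average severity across a list of observed outcomes."""
    if not observations:
        return 0.0
    return sum(SEVERITY[o] for o in observations) / len(observations)

# Ten observations at a factory hotspot: one accident, two near misses.
log = ["non_event"] * 7 + ["near_miss", "near_miss", "accident"]

# Binary coding sees only the single accident: 1/10 = 0.1.
binary = sum(1 for o in log if o == "accident") / len(log)

# Weighted coding surfaces the near misses: (0.9 + 0.9 + 1.0)/10 = 0.28.
weighted = risk_score(log)
```

Under the binary scheme the two near misses vanish; under the weighted scheme the same hotspot looks nearly three times as risky, which is precisely the information a posted observer would be there to capture.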
As Dr. Gray and I spoke, I remembered a recent near miss of my own. I was cycling along a bike lane on a busy street in Toronto when I felt rather than saw a 16-foot truck coming too close to me. Looking over, I saw the truck’s cab start to turn in my direction, heading across me into the driveway of a business. I braked sharply and, as the long body of the truck swerved into my lane, scrambled backward onto the sidewalk. At the last minute, the driver saw me, and with the truck still running in the driveway, three men jumped down from the cab to make sure I was all right. I was unhurt, but the shock of what had almost happened reverberated around me for the rest of the day.
No record of my near death – or my reasons for subsequently avoiding this street, even though it has a bike lane – exists in any official document. But, as it turns out, urbanists have been working on making it easier to access and report these incidents.
In 2014, researchers from the University of Victoria and Simon Fraser University created BikeMaps, an online project that allows cyclists to enter their own reports of collisions and near misses. During the first two months of the pilot in Victoria, the crowdsourcing site collected as much data as the city’s official cycling-collision reports gathered in a year – near misses constituted 62 per cent of the new information.
Looking at the site now, I’m struck by how dangerous it makes cycling seem. All those bar graphs in alarming reds and oranges, the clickable icons where irate cyclists have shared stories (and sometimes licence plates) of drivers who squeezed them off the road, clipped them with a mirror, or came a hair’s breadth from crushing them to death. Being alive is dangerous; to get into the driver’s seat of a car also requires us to temporarily suspend the reasonable fear of hurting ourselves or someone else. Awareness of the near miss is a mixed blessing: thinking too vividly about the ever-present potential for disaster can be paralyzing.
Then again, the threat of disaster can galvanize – on a global as well as an individual scale. The present long political moment has been widely described as a global tilt away from democratic ideals. Since 1972, the U.S.-based think tank Freedom House has issued a yearly report on the state of democracy, and recent news has not been good: The 2018 index recorded a worldwide decline in freedom every year since 2005. As various countries around the world have elected leaders who seem scornful of democratic procedures, it can feel as though the whole world is speeding headlong into a dangerous bend in the road.
In the midst of a flurry of new research on what the death knells of democracy sound like, researchers at the University of Chicago wondered: Why not look at cases in which a country experienced a serious erosion in the quality of democratic functioning, but then righted itself? The resulting paper, authored by Tom Ginsburg and Aziz Huq and published in the Journal of Democracy in 2018, works toward elaborating a method for identifying and studying “the democratic near miss.”
If the subjective elements of individual memory make it hard to document slips, trips and falls in a factory setting, the criteria for what constitutes democratic decline and recovery are even more malleable. For their first attempt, Dr. Huq and Dr. Ginsburg compared data from three widely used indices: those produced by Freedom House, the Polity Data Series and the Varieties of Democracy collective. The researchers looked for a particular pattern: countries that enjoyed a stable democracy for at least three years, then experienced a significant dip of at least a year, which was then followed by another three years of solid democratic functioning. All indices showed countries that followed this pattern, but none agreed on which countries they were. India in the 1970s was one contender; arguments could be made for Bolivia and Guyana. Fiji seemed to be more a case of “democratic careening,” in which a state flickers wildly back and forth from one form of government to another.
Concluding that the tools for studying democratic near misses had not yet advanced to a point where quantitative comparisons were possible, the researchers instead chose three case studies from existing literature: Finland between the world wars; Colombia in 2010; and Sri Lanka in 2015. All three experienced a sustained moment of turmoil, in which authoritarian figures attempted to override existing structures to consolidate their control. In all three cases, Dr. Ginsburg and Dr. Huq came to the paradoxical conclusion that the defence of democracy came largely at the hands of non-elected actors: judges, civil servants, military officials and political elites.
I wondered if this wasn’t a little depressing – did popular dissent play no role at all? These findings, Dr. Huq told me, were not to say that citizens had no power to protect their institutions. Citizen participation affected these outcomes both directly and indirectly – in the case of Sri Lanka, civil-society groups united to push for the election of Maithripala Sirisena, the challenger to autocrat Mahinda Rajapaksa. In Colombia, the high regard in which the public held the Constitutional Court seems to have helped persuade Alvaro Uribe to accede to the Court’s refusal to extend his time in office.
In studying what doesn’t happen, one runs into some of the same difficulties as in studying what does. Cause and effect are never so clearly related as convenience would have it, and the reasons why one country slides into autocratic rule and another recovers may have more to do with chance, or a specific political culture, than with safeguards that can be replicated elsewhere. In building a bulwark against democratic decline, these illustrations can, however, hint at the complex mechanisms that sink societies into dangerous waters, and, unexpectedly, float them out again.
The spectacular near miss of Flight 759 served to highlight a serious flaw in Canada’s air-traffic-safety regime. On the plane’s descent into San Francisco, the pilot was suffering from fatigue – an air-safety investigator later told media that mandated rest times for Canadian crews were among the most meagre in the world. At the time of the near crash, new regulations were already in the works: Canada now allows only nine to 13 consecutive hours of work in a 24-hour period (down from 14 hours).
However, the explanatory potential of near misses doesn’t guarantee solutions. The Air Canada Pilots Association declared its members’ disappointment with the new rules, saying that Canadian pilots are still expected to perform overnight flights two hours longer than NASA recommends. How much risk is too much? The world that doesn’t come into being is as open to poor interpretation as the one that does.