
Chris Clearfield is the founder of System Logic, an independent research and consulting firm. Andras Tilcsik holds the Canada Research Chair in Strategy, Organizations and Society at the University of Toronto’s Rotman School of Management. They are the authors of Meltdown: Why Our Systems Fail and What We Can Do About It.

The proliferation of internet-connected devices – from smart thermostats to fridges that automatically order milk for us – promises a new era of convenience. But such devices have also led to a spate of security failures: easy-to-hack baby monitors, connected TVs that spy on us and smart fridges that allow hackers to steal our e-mails. In a stunning demonstration in 2015, security researchers remotely hacked a car while it was speeding down the highway. They used the infotainment system as a foothold to access the car’s built-in computers, which controlled everything from the windshield wipers to the steering wheel.

But there’s more to the problem than hackers. Smart devices can just as easily wreak havoc on their own. In the winter of 2016, a smart thermostat went haywire and left thousands of people without heat; a few weeks later, a similar device turned homes into steam baths. Other glitches turned off smoke detectors and disabled a wireless hub that managed nearly every feature of smart homes, from the lights to the locks. And in February, a software bug knocked out the same infotainment system that the car hackers had exploited in 2015, a system that controls not only the radio but also the air conditioning, navigation devices, rear-view cameras and even the SOS feature.

The results of these failures – sweltering homes, disabled smoke detectors, inoperable locks and compromised SOS systems – paint a grim picture, and it's not hard to imagine even worse scenarios.

Connected devices are supposed to simplify our lives. But in reality, they add complexity, creating new paths for both hackers and software bugs to do harm not only in virtual spaces but in the physical world, too.

To be sure, cars, thermostats and refrigerators have broken down as long as they have existed. But when we turn them into smart devices, the nature of risk changes. When we connect previously offline machines to the internet and to one another, we build intricate webs in which problems can quickly spread beyond where they started. And we set ourselves up for meltdowns that can affect tens of thousands of users simultaneously.

Consider how the above-mentioned hacker attack and software bug affected cars. In both cases, problems started in a seemingly innocuous component – the infotainment system – but the connections between different parts meant that hackers could jump from one system to the next until they gained full control, and that a single malfunctioning part could compromise many systems. What makes connected devices attractive to hackers also makes them vulnerable to unexpected meltdowns.

Even just a decade ago, our car radio wasn’t a gateway for criminals, and even if it broke, everything else in our car still worked. We live in a different world now.

Worse, the scale of problems can be much larger with smart devices. When a physical car part breaks, it usually affects one car at a time; when the software breaks, it can affect every car running on the same platform. And the same goes for all sorts of smart devices.

This doesn’t mean that we should abandon connected devices. But we do need to be smarter about our smart gadgets. We can’t just focus on hackers looking to steal our information in cyberspace; we also need to worry about smart devices accidentally causing trouble in the real world.

As customers, we must consider the cost of inviting connected technologies into our lives – not only the cost of the device itself but also the cost of its vulnerabilities. A minor convenience isn't always worth the risk.

We also need to demand more of the companies that design these devices. We value safety and security in traditional products; we should bring the same attitude to smart machines, recognizing that they can fail in many more ways than their unconnected cousins. It's imperative that we advocate for digital safety in the same way that consumer groups in the 1960s and '70s advocated for automotive safety.

Ultimately, the responsibility lies with the companies that develop smart technologies. Too often, they rush connected gadgets to the market while still depending on a safety model designed for an offline world. Instead, they need to begin with the assumption that smart machines will fail and build in safety features from the start. Physical controls that can easily override rogue devices and fail-safe features that mitigate the fallout from a system collapse are a good place to start, even if they result in less sleek designs.

Smart-device makers should also learn from the successes and failures of industries that routinely manage complex technologies, such as commercial aviation and the nuclear industry. Research in such fields has revealed several solutions that help organizations tame risky technologies: cultivating a culture in which everyone feels safe to speak up about safety concerns, developing systems to learn from small mistakes and near misses, and bringing in skeptical outsiders to find out where trouble is brewing in a system. These initiatives aren’t rocket science, but putting them into practice requires a radically different safety culture than what prevails in many companies.

The transformation of our simple offline devices into complex, connected machines is nothing short of revolutionary. But our safeguards lag behind. Although we shouldn’t whip up a panic over the risks, we can’t ignore them either. There is something different and disturbing about our age, when even a small glitch can impair our cars or cause mayhem in our homes.
