In early March, an assistant at a major Toronto law firm received an urgent email from what appeared to be a partner's Gmail account. His cell was broken, he wrote, so he was using someone else's computer to contact her. He was on his way to court and desperately needed the assistant to print some documents and courier them, stat, to the courtroom. The documents, he continued, were attached to the email. The assistant dutifully opened the attachments.
Then the firm's entire network crashed, and its data—almost 9,000 client records—vanished.
Not long after, the firm—let's call it Luckless, Patsy & Chump—received another email. This one, from a person nobody at the firm recognized, said the data would be returned only if Luckless, Patsy & Chump paid a ransom of $15,000 in bitcoins.
It wasn't a ton of money, but ransomware attackers don't always return data even after a ransom is paid. Worse, a mishandled payment and recovery can leave the attackers a back door through which they can hit the network again.
So Luckless, Patsy & Chump called the Toronto boutique cybersecurity firm Cytelligence, a specialist in data breaches. The company offers a suite of services, but in this case, it acted, in Cytelligence CEO Daniel Tobok's words, as "hostage negotiator." The company brokered the bitcoin payment, but held the money in escrow so it could ensure it got back the law firm's data. Sometimes, attackers will siphon out data and haul it away; other times, they'll simply encrypt the data and then make their victim pay for a key to decrypt it. Cytelligence made sure the key actually worked before it handed over the cash. The attacker, Tobok says, was most likely organized crime based in South Africa. "When we talk to people," he says, referring to ransom attacks like this, "they're like, 'Oh, that just happens in the movies.' No, it doesn't just happen in the movies. It happens every day."
No kidding. While military, intelligence and government officials have been warning about the seriousness of cybersecurity breaches for many years—the United States spent $31.5 billion (U.S.) on related tools and services in 2016—attacks and breaches now occur more frequently than Netflix releases new TV shows. In the six weeks I spent researching this story, both the Canada Revenue Agency and Statistics Canada temporarily took their websites offline after vulnerabilities were exposed (Statscan was actually breached, though the site was pre-emptively shut down before any damage was done); a 22-year-old Canadian was arrested for his alleged involvement in the massive 2014 Yahoo hack (in which half a billion email accounts were stolen); McDonald's Canada revealed its job website had been hacked and the files of some 95,000 applicants compromised; and Britain's National Health Service reported that it suffered 55 different cyberattacks in 2016. And those are just a few of the incidents that were publicized. Meanwhile, as news of these incidents broke, emerging details about Russia's hack of the Democratic National Committee's computer networks continued to provide a vexatious background hum.
The effects of such attacks are multifarious and, by now, chillingly familiar: They can create large-scale embarrassment (the Ashley Madison and Sony hacks), effect massive fraud (last October, hackers took over a Brazilian bank's entire system for several hours, creating dummy websites to which customers unwittingly fed their financial details), endanger entire nation-states (Russian cyberattacks plunged parts of Ukraine into blackouts just before last Christmas), and wreak political havoc (did we mention Trump and Russia?). They also cost a lot of money. According to an April report produced by the Canadian Chamber of Commerce, "Cyber Security in Canada," cybercrime is costing Canadian businesses over $3 billion a year, with that money being spent on detection, containment, remediation, fines and lost business. Both that report and a 2016 study by IBM and the Michigan-based information-security firm Ponemon Institute place the average cost of a data breach for Canadian companies at just over $6 million.
In issuing the report, Chamber of Commerce CEO Perrin Beatty pointed out that many businesses focus on recovery rather than prevention. But, of course, it's much better not to get hacked in the first place. While most information-security specialists say it's not a question of if you'll get hacked but rather when—Tobok says "there's no such thing as a status of 100% secure"—there are numerous precautions businesses can and should be taking to protect themselves from cyberattacks. Foremost among these is a cyberdefence procedure known as a penetration test. As the name suggests, a pen test is a rigorous, comprehensive investigation of a computer network, performed by an independent third party, in order to make sure the network cannot be penetrated. (It's also sometimes referred to as ethical hacking.) At the most basic level, it's like making sure all the doors and windows of your house are locked.
But too many companies forget about an upstairs window, or their deadbolts have rusted, or they've inadvertently invited people into their home and don't even know they're there. More worrisome, a lot of companies don't want to invest in an adequate security system at all. Penetration tests are considered a luxury, and many companies, concerned only with compliance checklists, will settle instead for a cursory vulnerability assessment.
"The problem is that most people think the risk is overblown," says Ottawa-based information-security expert Adrien de Beaupré, who has been in the business for more than 20 years. He's worked for Bell, where he tried to break into the Vancouver Olympic Games' networks, and teaches pen testing through the SANS Institute, the renowned American computer security organization. "They say, 'You cyberguys are always saying the sky is falling.' Which, legitimately, is quite true—we have always been saying the sky is falling."
But now the sky is falling, I say.
De Beaupré laughs mirthlessly: "The sky has fallen several times."
Before launching Cytelligence a year ago, Tobok ran Digital Wyzdom—which by 2013, he says, was the largest private forensic company in Canada. After Telus bought it out, Tobok spent three years running Telus's national security team before striking out again on his own. Cytelligence bills itself as "the elite force of global cybersecurity" and, accordingly, Tobok has stacked it with seasoned ex-cops and former military personnel—among them, Gene McLean, a member of the Security Intelligence Review Committee (the CSIS watchdog); and Nicholas Scheurkogel, former head of cyberintelligence capabilities at the Department of National Defence. The company's clients have included Bombardier, Cisco and the Bank of Canada.
When I first visited Cytelligence's offices at Yonge and Eglinton in early spring, its 20-some investigators had been working around the clock to repair the damage caused by six separate data breaches across the country. (The company usually does between 25 and 30 investigations a month.) But aside from a pizza box in the boardroom, the eerie silence (most staff had gone home to rest) and the fact that the company's exhausted VP of forensics, a former detective sergeant with the OPP's high-tech crime unit named Bernard Miedema, had traded in his customary suit for an open-collar shirt and chinos, there was little evidence of the day's labour. The office was impeccably clean, quiet and orderly, an almost nondescript study in dove grey and frosted glass. Even the forensic lab—the nerve centre of the operation, equipped with a dozen computers (none of them connected to the Internet) and littered with Cellebrite mobile forensic devices, write blockers (devices that let investigators read a hard drive without altering its contents) covered with yellow evidence stickers, and other expensive, arcane gadgets—had a serene ambience that was more Mister Rogers than Mr. Robot. To the untrained eye, only the biometric handprint lock that kept the room secure and a ceiling-mounted siren light, which glows orange when visitors are around, suggested the room was anything more than an IT-department supply closet.
It was all a bit of misdirection, however. Just like Cytelligence's chief pen tester himself. Bryan Zarnett, the company's managing director of security consulting, is 47 and has the low-key, accommodating demeanour of the suburban martial arts instructor he is in his spare time. (The outfit too: today, a hoodie emblazoned with the logo from the kung-fu classic Five Deadly Venoms, dad jeans, Fitbit clasped around his left wrist.) But Zarnett's been playing with computers his whole life: His father taught computer science at Ryerson, where Zarnett also studied, and had Zarnett tinkering with mainframes at the age of 12. Before arriving at Cytelligence, he led security teams at Nortel and Telus.
As a pen tester, Zarnett uncovers vulnerabilities in networks and then deliberately tries to exploit them. In industry vernacular, exploit is also a noun—an exploit can be just a tiny crumb of code inserted in the right place, or an elaborate con that preys on common human frailties (more on that later). Zarnett is what's known as a "white hat hacker"—a good guy who spends his days pretending to be a bad guy ("black hat hacker," of course) so that companies and organizations can fix problems before the real bad guys discover or take advantage of them. Bad guys—be they rogue hackers, nefarious competitors or run-of-the-mill crime syndicates—are constantly coming up with new exploits, and people like Zarnett are constantly trying to keep one step ahead. Or at least not fall too far behind.
Zarnett is in charge of several things at Cytelligence: among them vulnerability assessments, code reviews, security audits and penetration testing. Real penetration testing, Tobok insists. "It's like going to the doctor. He looks at you, maybe takes some blood tests, and that's it, you're done. But you go to a specialist, he holds you by the balls and asks you to cough. He targets a potential area. That's what a real pen test is—a targeted assessment."
A pen test typically begins with reconnaissance to assess a web application or phone for vulnerabilities or weaknesses, the most common of which are unpatched (that is, outdated, non-upgraded or improperly configured) software and systems. Then the testers begin to exploit those vulnerabilities and see how far they can go. Depending on a client's budget and what exactly they want targeted, the pen tester will try to hack into a network using digital, physical or emotional means. They'll try to bypass security mechanisms to gain administrative access, obtain confidential data or destroy records. They'll send spear-phishing emails like the one that took out the law firm. As Tobok says, companies will often request a targeted pen test, depending on their industry and what they regard as their worst nightmare—a non-profit, say, might be afraid of someone gaining access to their donor servers, or a mining company might want to see if the cameras at their diamond mines could be hijacked. The ideal pen tester is equal parts IT grunt, private eye, con artist and behavioural psychologist.
The actual details of the test are generally mundane and technical, but the flaws it uncovers—had they been detected by a malevolent hacker before Cytelligence took a look around—could have been catastrophic. Zarnett recalled a pen test conducted for a financial institution that began with the team trying to figure out which services were running on which systems. "As we were trying to figure that out, we took down the whole network cluster. The system just went bang. We found out there was a patch they were supposed to do on their firewall six months ago, and our little bit had caused the whole thing to self-destruct. We took them out for three hours."
Pen testers will also try out hacks in physical space. Zarnett recounted another incident where he made an appointment with a company and, as he was sitting in the waiting room, noticed an untended network plug. He stuck in a tiny device with its own WiFi signal—it looked a bit like a USB drive—and waited for five minutes to see if anyone noticed. No one did. "Then I left," he says. "No one is going to see my little device under the chair, and outside the door I take out my small laptop with all my hacking tools on it, and I'm surfing their system for the next four hours. I can scan the network, see what the traffic is, build up a plan of attack."
A full pen test can take days or weeks, depending on the size of a company's networks and the scope of the test. They can cost anywhere from $10,000 to a few hundred thousand dollars; Cytelligence's average is around $25,000. Not surprisingly, Tobok wouldn't let me sit in on a test with an actual client—"Our business depends entirely on secrecy," he says—but Zarnett walked me through the basics. He opened a MacBook Air and loaded a web application called Gruyere. Gruyere is basically a simulator, a CodeLab program that lets hackers experiment with exploits in a safe environment; you're not on an actual client's network and any damage you do can be undone with the click of a button. It looks like a real, quite basic, website with various fields that allow users to input any kind of data, create logins, test passwords. Zarnett then started playing with the fields, trying out passwords and uploading files to see how the site would react.
To do this, Zarnett used just his Android phone, an open-source pen-testing platform called NetHunter and the security scanner Nmap; within seconds, he was able to determine where the site was located, who was hosting it, what applications and services were running on it, and on which ports. Another tool, with the ungainly name of Burp Suite, allowed him to monitor all the transactions going on in real time.
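The kind of service discovery Nmap automates can be sketched in a few lines of Python. This is a minimal TCP connect scan (the simplest of Nmap's many techniques), with the host and port list as stand-in values rather than anything from Zarnett's test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; open ports accept the handshake."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on a host you are
# authorized to test (unauthorized scanning may be illegal).
# print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Each open port a scan like this turns up tells the tester which services are listening, which in turn suggests which known vulnerabilities to try next.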
"So, we know they have these technologies on these ports," Zarnett says. "What are the common vulnerabilities? What are the ways we can attack it? So what we do is build up a map and a plan—a threat model." After this reconnaissance, the pen tester roams elsewhere, systematically scooping up all the information he or she can—details about employees, how they interact, what system software updates are in place, what kind of rout-ers they use. "In a lot of cases, it's as easy as this," Zarnett says, poking around Gruyere. "Oh, there's an upload feature—let me see what I can type in there and change."
For the most part, the inner workings of penetration testing, with its thicket of acronyms and mongrel vernacular (a mashup of tech-speak, military lingo and security jargon), are impenetrable to the layperson—and about as exciting to watch as a repairman fixing your dishwasher. But one of its most compelling, and insidious, facets is what information-security experts call "social engineering." Just as Zarnett used his hacking tools to harvest digital information about a company's potential weaknesses, a social engineering hacker would use the Internet (or even just a phone) to gather as much information as possible about a human target. That could be a system administrator or someone in reception or an unwitting CEO. Scouring that target's social feeds, their LinkedIn profile, blogs they may have written, a hacker could then use this map to craft spear-phishing emails that would exploit this information.
Kevvie Fowler, a cyber partner with KPMG (which, like Cytelligence, helps companies prepare for and manage cyberincidents), describes a typical scenario: "Let's say they have kids in soccer," he says. "You might send a message to the individual saying, 'Hey, I saw your kid in soccer practice last week. Here's a picture I took—I wanted to make sure you had it.' If that comes to the parent, they'll definitely click on the picture. That picture could have a link to a malicious file or a site that would download some sort of back door and allow someone in."
Fowler, another Telus alum who is also a SANS "lethal forensicator," says most attacks now have some element of social engineering or exploit human vulnerabilities. It might even be as simple as charming HR long enough to hand over the names of a company's accountants. Hackers often target third-party vendors and service providers, like accountants or law firms, whose security measures might be less stringent, in order to gain access to, say, a well-fortified big bank or telecom. (Target Corp.'s massive 2013 breach occurred after network credentials were stolen from the supplier of the company's Internet-connected HVAC system.) Zarnett recalls one social experiment where one of his team members was able to get behind a receptionist's desk, then on to her computer, and then actually pose for a picture with the smiling receptionist standing beside him (a trophy to prove how easily access had been gained).
De Beaupré, the Ottawa consultant, argues that even the Democratic National Committee hack was itself an example of such fraud. "Did the Russians influence the election?" he says. "It doesn't matter whether they did it through technology or not. They social-engineered the entire country into believing that Russia influenced the election. Therefore, Russia did influence the election."
A couple of years ago, Public Safety Canada issued a list of four mitigation strategies that, the agency said, could cut cyberattacks by as much as 85%. Chief among them was patching applications and operating system vulnerabilities. But too often, companies order a pen test and then fail to plug the holes discovered during the test; they may as well have blown that money on extra booze at the annual holiday party.
"The remediation of problems is a large-scale issue," Zarnett says. "Some people just don't see the value of it. They'll do the assessment. The fixing, people don't necessarily care about—regardless of the number of data breaches." Zarnett told me he still sees things like Heartbleed—a security flaw publicly disclosed in 2014 that compromised sites including Google and Yahoo—even though they're easily patched. And after repeatedly being warned, other organizations continue to use outmoded, insecure technologies like file-transer protocol, or FTP, servers. (In March, the FBI's cyberdivision specifically cautioned the health-care industry, a particularly vulnerable sector, that criminals were actively targeting such servers.) "People are living under a false sense of security," Tobok says. "I'm not saying everybody needs to be paranoid and we all have to get AK-47s, but you have to understand the attack vectors. It's a risk." As part of their pen tests, depending again on the scope ordered by a client, Cytelligence and other cybersecurity firms can do the necessary patching or help train employees to recognize social engineering hacks.
But an equally pernicious problem is that organizations commonly limit the scope of the test. Too often, in de Beaupré's estimation, companies are reluctant to investigate their own employees, even though the majority of attackers, he says, have some inside access. The most notorious of these, he says, are Edward Snowden and Jeffrey Delisle, the Canadian naval intelligence officer who sold top-secret military information to Russia.
"Attackers do use insiders," he tells me, "either coercively or not. And companies don't want to know that. 'We love our people. How could they possibly attack us?' So inside attacks are usually out of scope. Social engineering is usually out of scope. Clients that attack are usually out of scope. So the most common forms of attack that attackers use are usually out of scope."
To de Beaupré, who once worked for Public Safety himself, such thinking is characteristic of Canada's lax attitude toward cybersecurity as a whole: "The Americans are realizing, okay, we're being hit, we have to start spending money on this stuff. And Canadians? Their heads are up their asses." De Beaupré's greatest concern is our critical infrastructure (CI)—energy systems, banking, transportation. While there have been few publicly known attacks on Canadian CI so far—unless you count data breaches of the federal government—Public Safety has long warned of how vulnerable those systems are, and still no one, de Beaupré says, has done the hard work of hardening them. "There's a complete lack of leadership from the government," he says. "It took them five years to get out their Cyber Security Strategy. Okay, what did they do with it? Where's the plan? Where's the budget? What have they done? Diddly."
Well, yes and no. The feds are now in the final stages of enacting legislation that will force all Canadian companies to immediately report all system breaches, what information was lost and how the attacker gained access. (That information will go to the Office of the Privacy Commissioner of Canada, which will then decide if the information should be public.) Businesses that fail to report breaches could be fined as much as $100,000—enough of a penalty to nudge some companies along. As well, Public Safety Canada has joined the Canadian Cyber Threat Exchange, a non-profit group of businesses and institutions that began operating in February. Founded by, among others, Air Canada, Bell, Hydro One Networks and Royal Bank, the CCTX is set up as a forum for companies to anonymously share cyberthreat information. It's still small—as of this writing, 14 organizations had signed up, with another 13 in the process—and not cheap ($50,000 a year for firms with more than 500 employees, with reduced rates for smaller players), but it's at least an acknowledgment that cyberdefence needs to be taken more seriously.
With new players entering the sector seemingly every week—KPMG, Deloitte, PwC, and BlackBerry all offer a range of cybersecurity services, in addition to boutique shops like Cytelligence—businesses shouldn't wait for the government to catch up. "If you determine that you are at risk, and then go through mitigation and put security mechanisms in place," says de Beaupré, "that's at least 10 times cheaper than recovering from a breach. Everybody knows you should take your car to the shop and have the oil changed periodically. But no one seems to know they should do regular preventative maintenance on their computer. They treat it like a toy. But it's not a toy; it's a tool."
The next day, however, at another company, the same mistakes would surface all over again. And then, again, at another one after that. Zarnett isn't complaining—it means more work. "But to be honest," he says, "it drives me crazy. No matter what the news is, people still aren't getting it. Until it hits them. You don't have to be paranoid, but you should be prepared."
A cocktail-party guide to hacking jargon
Brute force: A type of attack that involves trying every possible means of gaining access, one by one.
Dark web: Internet content not indexed by standard search engines like Google, and accessible only using networks like Tor. Often used by website operators who want to remain anonymous.
Denial-of-service attack: Flooding a target (most often a website) with spam, viruses or requests for data, with the aim of slowing or shutting down the site.
Exploit: A means of taking advantage of a bug or weakness (called a vulnerability) in a computer or piece of software.
Infosec: Short for information security; the term insiders prefer over cybersecurity.
Keylogger: Software or hardware that secretly tracks keystrokes to gather information.
Malware: Any kind of malicious code or software intended to damage or disable a computer system.
Penetration test: A simulated (and authorized) attack on a computer system to look for security weaknesses.
Phishing: Soliciting confidential information—personal data, credit card numbers and the like—from an individual or organization by mimicking a trusted source.
Ransomware: Malware that locks a computer so you can't access your files. Victims must pay a ransom (often in bitcoin) to recover them.
Rootkit: Tools used to surreptitiously gain administrator-level access to a computer or network, allowing the infiltrator to do whatever they want to the compromised system.
Social engineering: Manipulating people—through lying, blackmail, impersonation and other trickery—to gain access to a system.
Zero-day attack: When an attacker exploits a previously unknown or undisclosed software vulnerability—as in, the software's maker has had zero days to create a patch or workaround.
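As a toy illustration of the brute-force entry above (a teaching example, not any real attack), here is a sketch that recovers a four-digit PIN from its SHA-256 hash by trying every combination one by one; the PIN and hash are made up for the example:

```python
import hashlib
from itertools import product

def brute_force_pin(target_hash, length=4):
    """Try every possible digit combination until one hashes to the target."""
    for combo in product("0123456789", repeat=length):
        guess = "".join(combo)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None  # exhausted every combination without a match

# A defender's takeaway: all 10,000 four-digit guesses take only a
# moment, which is why short PINs and passwords offer so little protection.
secret = hashlib.sha256(b"7351").hexdigest()
print(brute_force_pin(secret))  # → 7351
```

The same exhaustive-search idea scales up to password cracking, where attackers run through dictionaries and character combinations instead of four digits; the only real defence is making the search space too large to exhaust.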