With the rise of cloud-based apps and the proliferation of mobile devices, information security is becoming a top priority for both the IT department and the C-Suite. Organizations enthusiastic about the Internet of Things (IoT) are equally guarded as global cyberattacks continue to dominate headlines.

Businesses ranging from startups to large corporations are increasingly looking to new technologies, like artificial intelligence (AI) and machine learning, to protect their consumers. In cybersecurity, AI can analyze vast amounts of data and help professionals identify more threats than they could find manually. But the same technology that can improve corporate defences can also be used to attack them.

Aleksander Essex and Ryan Wilson joined us for a live discussion on the impact of artificial intelligence on cybersecurity. Essex is the head of Whisper Lab, a cybersecurity research group at Western University, and an assistant professor of software engineering with a specialty in cryptography. Wilson is the chief technology officer for Scalar Decisions, a Toronto-based IT company involved in building critical systems for hospitals, banks and other institutions.

Q: Why is artificial intelligence becoming more popular in the cybersecurity space?

Alek: Cybersecurity is an immensely difficult challenge, and any opportunity to gain an edge, on either side, is something to look at.

Q: What problem can machine learning and artificial intelligence help solve for cybersecurity?

Ryan: AI holds tremendous promise for the security community, and it's something we need to leverage more. One key example: the proliferation of malware continues at an alarming rate. Humanity simply can't keep pace with the volume being produced, at rates of over 100,000 unique pieces of malware a day.

Traditional methods rely on humans to build signatures for each piece of malware (i.e. anti-virus), which doesn't scale or work at this volume. Machine learning, however, can be taught to distinguish good from bad and eliminate the manual need for signatures entirely. AI can provide an effective way to stop advanced, sophisticated malware attacks that have never been seen before.
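The contrast Ryan draws can be sketched in a few lines of Python. This is a toy illustration only: the samples, the "suspicious markers" and the scoring rule are all hypothetical stand-ins for the trained models real products use. It shows why an exact-signature lookup misses a never-before-seen variant while even a crude feature-based score can still flag it.

```python
import hashlib

# Signature-based detection: recognizes only exact, previously catalogued
# binaries, keyed by their hash.
KNOWN_BAD_HASHES = {hashlib.sha256(b"MALWARE-v1 payload").hexdigest()}

def signature_match(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

# Crude stand-in for learned features: count suspicious substrings.
# (A real classifier would use many trained features, not a keyword list.)
SUSPICIOUS_MARKERS = [b"keylog", b"exfiltrate", b"ransom"]

def feature_score(sample: bytes) -> int:
    return sum(sample.count(m) for m in SUSPICIOUS_MARKERS)

# A slightly mutated variant: its hash has never been catalogued.
variant = b"MALWARE-v2 payload keylog exfiltrate"
print(signature_match(variant))     # False: the exact hash was never seen
print(feature_score(variant) >= 2)  # True: features generalize past exact matches
```

The point of the sketch is the generalization gap: any byte-level change defeats the hash lookup, while behaviour- or feature-based detection can still catch the variant.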

The largest challenge organizations have today is keeping pace with attackers and finding staff members who are talented in the domain of cybersecurity. AI gives us the ability to drive out a lot of the tasks that humans, by nature, just don't do well.

Alek: Agreed. But AI can also help not just in reacting to threats, but in preventing them in the first place through better, more secure software design.

Q: What are some of those key tasks that machines and AI help us do from a cybersecurity standpoint?

Alek: Examples of 'good AI' might be: adaptive intrusion detection systems that are able to respond to threats they haven't explicitly encountered before; context-aware text processing for better spam filtering; malware detection as Ryan points out; and automated software testing to find and kill bugs before they ship.

Ryan: Some of the critical tasks AI can help us with are quick, automated responses to threats, shutting an adversary down before damage spreads. We can also use machine learning and AI to model the normal behaviour of our users versus abnormal or compromised accounts, as an example.
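The behavioural modelling Ryan describes can be illustrated with a minimal sketch, assuming a single made-up signal (daily login counts) and a simple z-score test. Production systems model many signals (time of day, geography, device) with trained models; this only shows the idea of flagging activity far outside a user's baseline.

```python
import statistics

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it sits more than `threshold` standard
    deviations from the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(today - mean) / stdev
    return z > threshold

daily_logins = [4, 5, 3, 6, 5, 4, 5]   # a week of normal activity (hypothetical)
print(is_anomalous(daily_logins, 5))   # False: within the normal range
print(is_anomalous(daily_logins, 40))  # True: possible compromised account
```

The design choice here is the key one in anomaly detection generally: rather than enumerating known-bad behaviour, the system learns what "normal" looks like per user and alerts on deviation.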

Q: But what about 'bad AI?' What are some of the ways hackers are leveraging AI to their advantage?

Alek: Some examples of 'bad AI' would be mutating malware. Rotating the shield harmonics was always a good way to defeat the Borg (sorry, ST:TNG reference). There's also a real opportunity for advanced phishing attacks by automating the human bad guy.

Ryan: I completely agree. Software security and ensuring code is free from vulnerabilities is also a clear area where machines and AI will provide the heavy lifting vs manual human analysis.

Hackers have built sophisticated tools and techniques, which they often share with each other in order to co-ordinate efforts and provide services to the hacking community. For example, hackers leverage AI to sift through volumes of public information to help better target victims. They also can utilize intelligent networks of compromised hosts/computers to launch attacks in a coordinated fashion.

Q: We have a question from one of our viewers. @evan467 writes: "Recently there have been attempts at designing software and systems such that they are 'hack proof,' in a way that the underlying subsystems (i.e. brakes and hydraulics) aren't able to be accessed or modified. This seems like a necessity for things like automated cars and medical equipment, but my question is: do you believe this type of design will lead to a safer machine, or does it just open new opportunities for hackers that we may just not have thought of yet?"

Alek: There's just no such thing as 'hack proof.' But in the case of something like secure internet voting, a research area I'm involved in, physical backups are a common-sense protection that puts threats out of reach of remote hackers.

Ryan: IoT is one of the biggest themes and areas talked about with respect to the security of devices that are literally life and death in some cases. As Alek noted, nothing is hack proof and never will be, but we do need defined standards across this industry to ensure the protection of devices. We also need to ensure those standards are regulated, and that when a vulnerability is eventually discovered, it can be quickly mitigated in an automated manner.

Alek: A good example would be driving your car. You cannot eliminate the risk of accidents, and yet we all still drive or take transit. We manage the risk, bringing it down to a tolerable level.

Q: As a follow-up, when do we stop trying to defend people?

Alek: When do we stop? I guess it really depends. That's application specific. For example, there's a culture in software to 'ship crap and fix it later.' How come? Because it just happens to be more economically beneficial to be 'good enough.'

Ryan: I think there are two pieces to that answer. First, we have to ensure that devices we provide to consumers meet a certain security standard and are regulated, which they aren't today. Second, we need a risk-based approach and modelling. This will ensure that what can be done to protect users is done, and that when a vulnerability is eventually discovered in code, it gets fixed properly. We need to demand this from manufacturers, and we don't today. It's all too common for us to allow poorly built, vulnerable systems to hit the market.

Q: What are the biggest cybersecurity threats you're seeing right now? (And are there any on the horizon that organizations should look out for?)

Ryan: Today the top threats facing Canadian organizations come from cybercriminals looking to monetize cybercrime, typically by targeting employees or end users. Most of these threats leverage email and phishing attacks as a starting point, which leads to the compromise of the endpoint and, eventually, theft of data or information that can be monetized. This includes the recent increase we're seeing in ransomware and its more malicious variant, doxware.

Alek: Some of the biggest threats pertain not to technology or capability, but rather to attitudes. Organizations are turning a blind eye to shipping insecure products, or going after security researchers who disclose vulnerabilities. But there are also customers' and users' attitudes: they may really want internet voting, whatever the risk, and can't be bothered to educate themselves about the technology and its limitations.

Q: Considering how common cybercrime seems to be, what advice can you give to those looking to implement stronger security measures in their organizations?

Ryan: Security for organizations should also be approached in a risk-based form. Security is about spending the least amount of money to protect your organization, and being comfortable with the residual risk. My general take is that you shouldn't spend a dollar until you understand what you're trying to protect. How do we do this? We really need to look at three things: prepare, defend and respond. Prepare is about building a proper cybersecurity program, taking a risk-based, business-focused approach: understanding what you're trying to protect and putting the right mix of safeguards (people, process and technology) in place to manage that risk. Defend is about choosing technology and safeguards aligned with that risk-based approach. And respond is about ensuring your applications and platforms are effectively monitored 24 hours a day, seven days a week, so that if a breach or compromise does occur, it's caught early and managed effectively.

Alek: As Ryan says, don't spend money till you know what you're trying to defend. But the key precursor is to even understand you need to have that conversation. Not everyone realizes that. Security isn't usually anyone's number one priority, until they get hacked. But it needs to be on the radar.

Q: And what if they don't have the budget for expensive AI?

Alek: I'm personally still a little bearish about AI in cybersecurity. I think it definitely has a role to play in the coming years, but I also see the rhetoric as pretty overheated. I guess when you're the CTO of an AI company, everything looks like the proverbial nail. But in terms of what companies can do, Ryan has talked about this. One thing they could do is actually know where their data is stored and have some situational awareness of their own infrastructure. Companies need to understand what they're trying to protect, and have a plan in the (inevitable) event of a data breach.

Ryan: AI plays an important role in identifying real threats to an organization and, in many cases, taking automated action. This allows our security teams to focus on what matters and on higher-value work as it relates to protecting the organization. Don't discount it. In many ways it's starting to gain commercial viability, and we've seen real strides in making AI more affordable. A real example: zero-day malware protection (which leverages machine learning and AI) is now commercially viable and at a price point even SMB organizations can afford.

Q: What is your favorite or most fascinating cybercrime and why does it interest you?

Ryan: My favorite social engineering attack involves phishing. Last year I was asked by a board of directors to evaluate how effective user awareness training is and how it would stand up to a 'sophisticated' phishing attack. I spent all of seven minutes designing a phishing campaign for this organization (so, not a lot of time), and an organization that usually had a sub-10-per-cent click-through rate on phishing emails went to 89 per cent when the campaign was somewhat targeted. It proved something I had believed to be true: no matter how much user awareness training we provide employees, we still need to backstop them with technology and processes. It's difficult to stop a motivated attacker. They only have to find one weakness in your organization.

Alek: Phishing is a huge issue. And it all comes down to user awareness and training. As a professor, I try to take the chance whenever possible to remind people, whether it's your car, your microwave or your computer, you owe it to yourself and to those around you to know something about how it works.

Q: Is there a skills shortage when it comes to cybersecurity? Given your experience at Western, do you believe colleges, universities and other educational programs are doing enough to train for the IT careers of the future?

Alek: Schools could do more by investing in more programs and their development. I teach a course on hacking. There are college and professional programs, but it's still pretty rare at the graduate level.


Aleksander Essex on Twitter: @aleksessex
Ryan Wilson on Twitter: @scalardecisions