
Interview: AI vs AI: the new cybersecurity challenge

"The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us..." - Bill Rue. Artificial Intelligence is one of those subjects that is constantly in the headlines at the moment; controversial, endlessly debated and widely misunderstood.

Written by Emmanuel Marshall, Bill Rue, Published on: 20 May 2020

Written by Emmanuel Marshall, Bill Rue
Published on: 20 May 2020

First published at https://www.mailguard.com.au/blog/ai-vs-ai on 09 May 2018 - updated edition below

"The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us..." - Bill Rue.

Artificial Intelligence is one of those subjects that is constantly in the headlines at the moment; controversial, endlessly debated and widely misunderstood.

AI promises to be a game-changer for cybersecurity, so I sat down with Bill Rue while he was Chief Technology Officer at MailGuard and asked him for his perspective on why AI is such an important technology.

Bill Rue's extensive experience in the IT world includes roles as Technology Strategist for Microsoft and work on military technology systems, so his insights into AI development and cybersecurity are grounded in real-world, user-facing technology.

Interview with Bill Rue

EM: Bill, how do you think AI is going to change the cybersecurity landscape?

Bill Rue: That’s kind of the unanswerable question, because AI is still a technology in its infancy, and we really can’t predict in any sort of realistic way what it might look like in even 3 to 5 years from now. Futurists trying to predict what disruptive technology will look like rarely get it right. Having said that, there’s real concern amongst some scientists and technology thinkers that future AI could be potentially weaponised; turned against us. There’s a lot of speculation about very powerful AI machines becoming self-aware and humans losing control of them, but even if we disregard that more ‘science-fiction’ sort of speculation, there are still other ways that AI could be a security issue.

EM: So there’s an inherent problem with AI that it could be used as a weapon as well as a tool?

Bill Rue: At the software level, even simple software can be weaponised. We know that because we've already seen it happen in cybersecurity incidents like NotPetya and WannaCry.

The software doesn't have to be intentionally malicious to be dangerous. Most cyber-attacks at the moment start with infiltration - like malware being delivered via an email - because hijacking existing systems is more efficient than building new weapons. It's basically a lot easier for criminals or terrorists to grab systems that already exist and take control of them than to build their own.

AI is just technology and primitive AI is already within the reach of regular people now. There are open-source AI platforms being built by big companies that malicious actors can download and exploit.

Even governments can see the value of that sort of 'hacker' approach to weaponising technology. Recently we saw that example from WikiLeaks, where the CIA hoarded exploits that they thought might be useful as weapons, and then those exploits were used by criminals once they got into the wild.

EM: Can cybercriminals exploit AI to make hacking and infiltration easier?

Bill Rue: What we should be concerned about is malicious actors getting hold of models of our security systems and training their AI to defeat them. That's the main reason we are now committed to an ongoing AI arms race. Cybercriminals want to use AI to attack, just as we in the security world want to use it to defend. AI builds a model of a problem and gets better and better at achieving its goals. So, before they send their AI-based attack out to the target, they will create a sandbox environment and train their AI to understand the weaknesses of our defences.
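To make that concrete, here is a minimal, hypothetical Python sketch of the "train against a model of the defence" idea: a toy Naive Bayes filter stands in for a real security system, and a greedy loop rewrites a phishing line until the surrogate stops flagging it. The corpus, synonym list and wording are all invented for illustration.

```python
# Hypothetical sketch: probing a surrogate of a defence offline.
# A toy Naive Bayes spam filter stands in for the real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

spam = ["verify your account now", "urgent invoice attached click here"]
ham = ["meeting notes from today", "lunch on friday", "resume attached for the role"]
surrogate = make_pipeline(CountVectorizer(), MultinomialNB())
surrogate.fit(spam + ham, [1, 1, 0, 0, 0])

def spam_prob(text):
    # Probability the surrogate assigns to the "spam" class.
    return surrogate.predict_proba([text])[0][1]

def evade(message, synonyms):
    # Greedily keep any single-word swap that lowers the spam score.
    words = message.split()
    for i in range(len(words)):
        for alt in synonyms.get(words[i], []):
            trial = words[:i] + [alt] + words[i + 1:]
            if spam_prob(" ".join(trial)) < spam_prob(" ".join(words)):
                words = trial
    return " ".join(words)

crafted = evade("urgent invoice attached",
                {"urgent": ["important"], "invoice": ["document"]})
print(crafted, "->", "flagged" if spam_prob(crafted) > 0.5 else "slips through")
```

Real attack tooling is far more sophisticated, but the principle is the one Bill describes: with a model of the defence to probe offline, an attacker can iterate until the payload slips through.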

EM: Do you think AI is going to be mostly an asset or a threat in a cybersecurity sense?

Bill Rue: AI is currently pretty basic. It’s not very smart - yet. As it gets smarter - and if you look at some of the work being done by teams like the Korean researchers who won the DARPA Robotics Challenge, or Boston Dynamics, you know that we’re moving fast toward independent AI - the big problem could be AI systems that can self-organise.

Part of my background is military, so I tend to think in those terms. AI is built by humans and humans are really good at making weapons.

Think about drones that can self-organise. That already exists. Now add guns to the equation and you start to see the potential problem. Actually, drones don’t even need guns to be a serious threat. Imagine you devise a swarm of self-organising drones and send them to fly over an airport or an air force base; get a few of those drones into a jet engine and suddenly you’re dealing with crashing aircraft and potentially the loss of airpower capability.

Extrapolate from that to non-physical software and we need to be thinking about swarms of virtual bots zooming around the internet, identifying weaknesses in security systems, breaking into them and stealing whatever they find, all without human guidance or intervention. That’s a very real possibility.

We’re seeing good evidence of cyber-attack actions even at the nation-state level, where governments are using hacking tools and malware to attack other countries. It’s a very grey area, but once malware is being used as a political tool or to damage state infrastructure then you really are talking about weaponisation. Governments are already using conventional malware as a weapon, so using AI would seem to be the next logical step, once it’s available.

The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us.

EM: So we’re in an arms race with cybercriminals to build better AI weapons; could a virus hunter AI be perverted into a virus trainer? Could black hat hackers potentially get our defensive AI software and use it to train better attack software?

Bill Rue: In a way, we’re already seeing something like that happening. We know that some malicious software now won’t run if it senses that it’s inside a “dump centre” - a sandbox environment where we study and learn about viruses. Cybercriminals are building that sort of functionality into their “bugs” so that it’s harder for us to take them down.
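For readers unfamiliar with that evasion tactic, the sketch below shows, in hypothetical Python, the kind of environment fingerprinting analysts commonly report in sandbox-aware samples. Every check and threshold here is invented for illustration; none of it comes from the interview.

```python
# Illustrative environment checks of the sort sandbox-aware malware makes.
import os
import shutil
import time

def looks_like_sandbox() -> bool:
    cpus = os.cpu_count() or 1
    checks = [
        cpus < 2,                                    # minimal VM
        shutil.disk_usage("/").total < 64 * 2**30,   # suspiciously small disk
        os.environ.get("USER", "").lower() in {"sandbox", "malware", "analyst"},
    ]
    # Timing probe: some emulators fast-forward sleeps to speed up analysis.
    start = time.monotonic()
    time.sleep(0.5)
    checks.append(time.monotonic() - start < 0.45)
    return any(checks)

if looks_like_sandbox():
    print("sample would stay dormant here")
else:
    print("sample would detonate")
```

Defenders counter by making analysis environments look as ordinary as possible, which is part of why this is an arms race.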

Virus intelligence is basic at the moment - it’s still not “true AI.”

We have a crucial window of opportunity at the moment to build AI systems that are at the very cutting edge so we don’t get pipped for line honours by the bad guys.

Criminals do, and will, obtain benign software and use it against us, so that’s the reason we need to keep doing research and keep evolving our systems to stay ahead of the criminals’ tools.

EM: What will future AI security tools feel like to a customer?

Bill Rue: The AI tools aren’t necessarily going to be that visible to the customer. The AI we’re working with is just observing and monitoring in a way that people don’t have the time and patience to do. It works in the background doing a lot of very fast repetitive tasks that our customers don’t notice - and that’s the whole point - the system is unobtrusive and actually enhances the user experience rather than intruding on it.

AI finds suspicious actions and then comes up with possible solutions to resolve threats based on predefined activity; that’s its learning process. The big difference with future AI will be that at the moment, we - the humans - have to act on the issues AI detects but in the future, our true AI systems will find the problem, find the solutions, implement them, and then predict the next problem that could arise, all on their own.
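As a hedged sketch of that detect-then-recommend loop, with humans still signing off as Bill describes: an off-the-shelf anomaly detector scores mail-flow features, and flagged events are mapped to playbook actions. The features, thresholds and playbook entries are invented for illustration.

```python
# Hypothetical detect-then-recommend loop for mail-flow telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: messages/minute, unique recipients, attachment size (MB).
normal_traffic = rng.normal(loc=[20, 5, 1], scale=[5, 2, 0.5], size=(500, 3))
detector = IsolationForest(random_state=0).fit(normal_traffic)

PLAYBOOK = {"quarantine": "hold message pending review",
            "throttle": "rate-limit the sending host"}

def triage(event):
    # -1 means the detector considers the event anomalous.
    if detector.predict([event])[0] == -1:
        action = "quarantine" if event[2] > 5 else "throttle"
        return f"suspicious: recommend '{PLAYBOOK[action]}' (awaiting human sign-off)"
    return "normal: no action"

print(triage([400, 90, 12]))   # burst of large mail to many recipients
print(triage([22, 4, 0.8]))    # ordinary traffic
```

The "future AI" Bill predicts would close the loop itself: apply the playbook action, verify the result, and learn from the outcome.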

For the end-user it will mean a faster, better, more secure service that will streamline communication and minimise interference - they won’t be aware of the AI component because it won’t require them to do anything.

EM: If you were a betting man, Bill, what would be your predictions for the way AI is going to change the cybersecurity landscape over the next 3 to 5 years?

Bill Rue: Essentially, AI will help define signal from noise and help us focus effort to get the best protection. When we talk about cybersecurity we’re talking about very, very big data sets, and handling all that data is a slow and difficult task for humans, but it’s what AI does best.
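As a toy illustration of that signal-from-noise point, the sketch below scores a large batch of telemetry in one vectorised pass and surfaces only a small queue for human review. The data and the 100-event review budget are made up.

```python
# Hypothetical sketch: compress 100,000 readings into a 100-event review queue.
import numpy as np

rng = np.random.default_rng(1)
volumes = rng.poisson(lam=20, size=100_000).astype(float)  # messages/min per host
volumes[:25] *= 40                                         # a few buried spikes

z = (volumes - volumes.mean()) / volumes.std()             # one vectorised pass
queue = np.argsort(z)[::-1][:100]                          # top 100 by anomaly score
print(f"{volumes.size} readings -> 100 for review; "
      f"{np.sum(queue < 25)} of 25 planted spikes caught")
```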

AI could also make cybercriminals more dangerous though, so the mission is to make good choices about how to use it.

We have to start using AI as one of our security spearheads because the cybercriminals are definitely going to use it to try and achieve their ends.