What happens when AI falls into the wrong hands
Mon, 15th Apr 2019

What happens when attackers start turning the power of deep learning and machine learning to their advantage?

For now, attacker use of AI is mostly confined to academic research rather than practical attacks.

But there's a lot of talk in the industry about attackers using AI in their malicious efforts, and defenders using AI as a defence technology.

Here's how to make sense of it all.

There are three types of attacks in which an attacker can use AI:

1. AI-boosted/based cyber-attacks – In this case, the malware runs AI algorithms as an integral part of its business logic, for example using AI-based anomaly detection algorithms to flag irregular user and system activity patterns.

Unusual patterns can trigger different malware behaviour: stepping evasion and stealth configurations up or down, or shifting communication times. Situational awareness has been built into malware for a long time, but AI can offer far more accurate and adaptive approaches.
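
As a minimal sketch of that anomaly detection idea, the Python snippet below fits scikit-learn's IsolationForest to invented per-hour activity features; the feature names and numbers are purely illustrative, not taken from any real malware.

# Minimal sketch: flagging irregular activity patterns with an
# unsupervised anomaly detector. Features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-hour activity features:
# [logins, processes_started, bytes_sent_mb]
baseline_activity = np.array([
    [3, 40, 12.0],
    [2, 38, 10.5],
    [4, 45, 14.2],
    [3, 41, 11.8],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_activity)

# New observations: the second row (an off-hours burst) should stand out.
new_activity = np.array([
    [3, 42, 12.5],
    [25, 300, 950.0],
])
print(detector.predict(new_activity))  # 1 = normal, -1 = anomaly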

An interesting use case can be found in DeepLocker, presented by IBM Security researchers at Black Hat USA 2018. DeepLocker conceals an encrypted ransomware payload and autonomously decides which computer to attack using a face recognition algorithm: the attack is unlocked only when the intended target is recognised through the camera.

There are other hypothetical use cases that might become part of malware business logic. Consider “anti-VM”, for instance. Sophisticated malware tends to check whether it is running on a virtual machine (VM), either to avoid carrying out its malicious activity inside a sandbox, which would reveal that the file is malicious, or to avoid being analysed by a security researcher, which might reveal how it works.

To assist their anti-VM efforts, malware writers could train a VM environment classifier that takes environment details (e.g., registry keys, loaded drivers) as features and decides whether the host the malware is running on is a VM. Such a model could also resolve some of the difficulties malware faces on cloud hosts, which are also VMs but not security-research VMs, increasing the malware's spread.
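
A minimal sketch of how such a classifier could be built with scikit-learn follows; the environment features, driver names and labels are invented purely for illustration.

# Sketch of a VM-vs-physical-host classifier trained on environment
# features. Feature dictionaries and labels are invented for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Each sample: environment details collected from one host.
hosts = [
    {"driver:vboxguest": 1, "mac_prefix:08:00:27": 1, "cpu_cores": 2},
    {"driver:vmhgfs": 1, "mac_prefix:00:0c:29": 1, "cpu_cores": 1},
    {"driver:nvlddmkm": 1, "cpu_cores": 8, "uptime_days": 42},
    {"driver:iaStorA": 1, "cpu_cores": 4, "uptime_days": 17},
]
labels = ["vm", "vm", "physical", "physical"]

model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=50, random_state=0))
model.fit(hosts, labels)

unknown_host = {"driver:vmhgfs": 1, "cpu_cores": 2}
print(model.predict([unknown_host]))  # e.g. ['vm']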

2. AI-based attack infrastructure and frameworks – In this case, the malicious code running on the victim's machine does not include AI algorithms; instead, AI is used elsewhere in the attacker's environment and infrastructure: on the server side, in the malware creation process, and so on.

For instance, info-stealer malware uploads large amounts of personal information to the command-and-control (C&C) server, which then runs NLP algorithms to cluster and classify parts of that information as “interesting” (credit card numbers, passwords, confidential documents, etc.).
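
As a toy-scale illustration of that triage step, the sketch below combines a regular expression with a Luhn checksum to flag likely credit card numbers in bulk text; real pipelines would layer NLP models on top for documents and credentials, and the same logic underpins legitimate data loss prevention tools.

# Sketch of "interesting data" triage: regex candidates plus a Luhn
# check to flag likely card numbers in bulk text.
import re

def luhn_valid(number):
    digits = [int(d) for d in number]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def flag_interesting(text):
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# Uses the well-known Visa test number, not real data.
print(flag_interesting("order ref 1234, card 4111 1111 1111 1111"))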

Another example is the #TheFappening attack, in which celebrity photos stored on iCloud were leaked. An attack like this could have taken place on a much larger scale had it been AI-facilitated: computer vision algorithms could review millions of pictures, identify which of them contain celebrities, and expose only the matching ones.
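
A hedged sketch of how such image triage works, using the open-source face_recognition library, is shown below; the image paths are placeholders, and the same primitives power benign photo search and forensic review.

# Sketch: find which images contain a known reference face.
import face_recognition

# Encode one reference face (placeholder path; assumes one face is found).
reference = face_recognition.load_image_file("reference_face.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

def contains_reference_face(path):
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):  # every face found
        if face_recognition.compare_faces([reference_encoding], encoding)[0]:
            return True
    return False

candidates = ["photo_001.jpg", "photo_002.jpg"]
print([p for p in candidates if contains_reference_face(p)])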

Another example of an AI-facilitated cyber-attack is spear-phishing, as described in the report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

In phishing, the target is “fooled” by a superficially trustworthy façade into exposing sensitive information or money, whereas spear-phishing involves collecting and using information specifically relevant to the target, making the façade look even more trustworthy and relevant.

The most advanced spear-phishing attacks require a significant amount of skilled labour, as the attacker must identify suitably high-value targets, research these targets' social and professional networks, and then generate messages that are plausible within this context. Using AI, and specifically generative NLP models, this can be done at a much larger scale and in an autonomous way.

3. Adversarial attacks – In this case, “malicious” AI algorithms are used to subvert the functionality of “benign” AI algorithms. This is done with the same algorithms and techniques used in traditional machine learning, but this time they are used to “break” or “reverse-engineer” the algorithms of security products.

For instance, stochastic gradient descent, a technique normally used to train deep learning models, can be turned around by adversaries to generate samples that are misclassified by machine learning or deep learning algorithms.
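
The research literature offers concrete recipes here. One widely published gradient-based method is the fast gradient sign method (FGSM); the PyTorch sketch below assumes a trained classifier model, an input tensor x scaled to the [0, 1] range and its true label y, all of which are placeholders.

# Sketch of the fast gradient sign method (FGSM) for crafting
# misclassified inputs. `model`, `x` and `y` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # to keep the perturbed input in the valid [0, 1] range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()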

One example of adversarial learning is placing a sticker in a strategic position on a stop sign, causing it to be misclassified by an image recognition street sign classifier as a speed limit sign.

Another example is injecting malicious data streams into benign traffic so that an anomaly-detection-based network intrusion detection system (NIDS) blocks legitimate traffic, effectively causing a distributed denial of service (DDoS).

Researchers have developed such attack techniques against computer vision algorithms, NLP models and malware classifiers.

We believe the AI-versus-AI trend will continue to grow and cross the border from academic proofs of concept to actual full-scale attacks as computing power (GPUs) and deep learning algorithms become ever more available to the public.

To mount the best defence, you need to know how attackers operate. Machine learning and deep learning experts need to be familiar with these techniques in order to build systems that are robust against them.