Ransomware is a big thorn in the side of today’s digital economy.
In the United States, victims of ransomware attacks paid out more than $24 million in ransoms in 2015, according to the FBI. And that’s only the American payouts the FBI knows about.
The global impact is far, far worse.
Terminating the threat of attacks is nearly impossible using standard technologies.
Often triggered by general phishing emails or by personally targeted spear-phishing messages, ransomware seems able to trick even the most cautious humans.
Enter artificial intelligence.
Traditional cybersecurity solutions study malicious code after it has already infiltrated an organisation or a consumer device. The threat is then analysed and added to a database of known threats, called the signature database. That database is distributed to the manufacturer’s software clients, which actively watch for threats on the list.
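The signature-based approach described above boils down to a lookup: hash the incoming file and check it against a list of known-bad hashes. The sketch below is a deliberately minimal illustration of that idea (real signature databases use far richer patterns than whole-file hashes); the database contents here are assumptions for demonstration only.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malware samples.
# (The single entry below is the digest of empty input, used as a stand-in.)
SIGNATURE_DB = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_threat(file_bytes: bytes) -> bool:
    """Return True only if the file's hash matches a recorded signature."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in SIGNATURE_DB

print(is_known_threat(b""))             # matches the stand-in entry -> True
print(is_known_threat(b"new variant"))  # unseen sample slips through -> False
```

The second call shows the weakness McClure describes: any sample not yet in the database, such as a freshly mutated ransomware variant, passes unflagged until someone is infected and the signature is added.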
According to Stuart McClure, chief executive officer of cybersecurity firm Cylance, this traditional approach is antiquated.
“You always need a sacrificial lamb; someone needed to be hacked first,” he says. “AI is about eliminating that sacrificial lamb.”
AI is increasingly able to detect and block the myriad versions of ransomware trojans and viruses. Cloud-based anti-malware tools can throw nearly unlimited firepower against ransomware hackers.
Cylance makes software that predicts, then blocks, cyberattacks on the endpoint in real time using pre-execution artificial intelligence algorithms. The solution proactively prevents, rather than reactively detects, the execution of advanced persistent threats and malware.
Cylance leverages the power of machines, not humans, to dissect malware’s DNA.
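Cylance’s actual models are proprietary, but the general idea of pre-execution classification can be sketched: extract static features from a file before it runs, then score them against weights learned from labelled samples. Everything below — the features, weights and threshold — is an illustrative assumption, not Cylance’s method. One commonly used static feature is byte entropy, since packed or encrypted payloads tend to score high.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits (0-8).
    Packed or encrypted payloads, common in malware, tend to score high."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def risk_score(data: bytes) -> float:
    """Toy pre-execution scorer. A real system would learn these weights
    from millions of labelled samples; these values are hypothetical."""
    features = {
        "entropy": byte_entropy(data) / 8.0,              # normalised to 0-1
        "has_mz_header": 1.0 if data[:2] == b"MZ" else 0.0,  # Windows PE marker
    }
    weights = {"entropy": 0.7, "has_mz_header": 0.3}
    return sum(weights[k] * v for k, v in features.items())

def verdict(data: bytes, threshold: float = 0.6) -> str:
    """Block before execution if the learned score crosses the threshold."""
    return "block" if risk_score(data) >= threshold else "allow"

print(verdict(bytes(range(256)) * 16))  # high-entropy blob -> "block"
print(verdict(b"hello world" * 10))     # low-entropy text  -> "allow"
```

The point of the sketch is the contrast with the signature model: no database lookup occurs, so a never-before-seen sample can still be blocked purely on its statistical characteristics, before it executes.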
The company has hundreds of enterprise clients worldwide, including Fortune 100 organisations and government institutions.
McClure says there are three ways to address cyber attacks: prevent the attack, merely detect it, or respond after the fact.
“We are all about prevention. We don’t care who did the attack, all we care about is that it’s blocked,” he explains.
McClure says they are already seeing the ability of this technology to truly predict attacks.
“This kind of technology and this kind of learning is what is going to save our industry,” he says.
When it comes to fears about the growth of artificial intelligence and wholly unsupervised classification (the Skynet theory), McClure says we are still a while away from that reality.
“We’re a long way away from having unsupervised classification. We’re nowhere near that,” he says. “Maybe in ten or twenty years we’ll have a different discussion. As of now, we’re a long way off from the Skynet-making-decisions-for-us theory.”
Kathryn Hume, president of Fast Forward Labs, which helps organisations accelerate their data science and machine intelligence capabilities, agrees with McClure, but says the danger in that Skynet scenario is not artificial intelligence itself.
“It’s not because computers are dangerous, it’s because humans are dangerous,” she says. “Algorithms tend to amplify the stupid decisions we as humans have already made. They work from decisions we have made in the past.”
McClure says cyber security is the perfect place to apply AI. “I actually don’t know why it hasn’t been done already,” he ponders.
McClure says artificial intelligence works in the security space because there is nothing new.
“There may be new twists to an old technique, but how many different ways can you break into a house? There are no new techniques,” he explains.
Hume adds, “AI systems aren’t as smart as we think they are. They work off what already exists.”
“Security is not 100%, it’s not a finish line. That’s impossible. There’s always some way to do something malicious. But the trick is to elevate the expense, the time and the effort it takes for people to bypass it,” McClure explains.
“It’s good because it makes it harder for the adversary to change their attacks to get past the technology,” he says.
“Let’s turn the tables on the bad guys.”