
How AI will automate cybersecurity in the post-COVID world

By now, it is obvious to everyone that widespread remote work is accelerating a digitization of society that has been underway for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means that cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks “because that’s where the money is.” If he had applied that maxim even 10 years ago, he would surely have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society’s operations being online, cybercrime is the most common type of crime.

Unfortunately, society isn’t evolving as quickly as cybercriminals are. Most people think they are only at risk of being targeted if there is something special about them. This couldn’t be further from the truth: Cybercriminals today target everyone. What are people missing? Simply put: the scale of cybercrime is difficult to fathom. The Herjavec Group estimates cybercrime will cost the world over $6 trillion annually by 2021, up from $3 trillion in 2015, but numbers that large can be a bit abstract.

A better way to understand the issue is this: In the future, nearly every piece of technology we use will be under constant attack – and this is already the case for every major website and mobile app we rely on.

Understanding this requires a Matrix-like radical shift in our thinking. It requires us to embrace the physics of the virtual world, which break the laws of the physical world. For example, in the physical world, it is simply not possible to try to rob every house in a city on the same day. In the virtual world, it’s not only possible, it’s being attempted on every “house” in the entire country. I’m not referring to a diffuse threat of cybercriminals always plotting the next big hacks. I’m describing constant activity that we see on every major website – the largest banks and retailers receive millions of attacks on their users’ accounts every day. Just as Google can crawl most of the web in a few days, cybercriminals attack nearly every website on the planet in that time.

The most common type of web attack today is called credential stuffing. This is when cybercriminals take stolen passwords from data breaches and use tools to automatically log in to every matching account on other websites to take over those accounts and steal the funds or data inside them. These account takeover (“ATO”) events are possible because people frequently reuse their passwords across websites. The spate of gigantic data breaches in the last decade has been a boon for cybercriminals, reducing cybercrime success to a matter of reliable probability: In rough terms, if you can steal 100 users’ passwords, on any given website where you try them, one will unlock someone’s account. And data breaches have given cybercriminals billions of users’ passwords.
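To see why that scale matters, here is a back-of-the-envelope sketch of the arithmetic described above. The roughly 1% success rate comes from the paragraph; the credential corpus sizes are illustrative placeholders, not measured figures.

```python
# A back-of-the-envelope sketch of the credential stuffing math described
# above: if roughly 1 in 100 stolen passwords unlocks an account on any
# given target site, the expected number of account takeovers scales
# linearly with the size of the stolen credential list. All inputs here
# are illustrative placeholders, not real figures.

def expected_takeovers(stolen_credentials: int, success_rate: float = 0.01) -> float:
    """Expected number of account takeovers on a single target site."""
    return stolen_credentials * success_rate

if __name__ == "__main__":
    for corpus_size in (1_000, 1_000_000, 1_000_000_000):
        print(f"{corpus_size:>13,} stolen credentials -> "
              f"~{expected_takeovers(corpus_size):,.0f} expected takeovers per targeted site")
```

With billions of leaked passwords in circulation, even that modest hit rate produces takeovers at industrial scale, which is why the attack is worth automating.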

Above: Attacks against financial services, 2017-2019. Source: F5 Security Incident Response Team.

What’s going on here is that cybercrime is a business, and growing a business is all about scale and efficiency. Credential stuffing is only a viable attack because of the large-scale automation that technology makes possible.

This is where artificial intelligence comes in.

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing – CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning-based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools.
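For context, this is roughly what the server-side half of a CAPTCHA check looks like. The sketch below uses Google reCAPTCHA’s siteverify endpoint as one common implementation (the article discusses CAPTCHA generically), and the secret key is a placeholder. The point is that a passing check no longer proves a human is present, because OCR-backed solvers bundled into attack tools can produce valid responses at scale.

```python
# A minimal sketch of server-side CAPTCHA verification, using Google
# reCAPTCHA's siteverify endpoint as one common implementation. The secret
# below is a placeholder. As described above, a "success" here no longer
# proves a human is present: OCR-based solvers built into credential
# stuffing tools can generate valid responses automatically.
import requests

RECAPTCHA_SECRET = "replace-with-your-site-secret"  # placeholder

def captcha_passed(token: str, remote_ip: str | None = None) -> bool:
    payload = {"secret": RECAPTCHA_SECRET, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data=payload,
        timeout=5,
    )
    return bool(resp.json().get("success", False))
```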

Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes. They do this with a fully automated attack infrastructure, using the same DevOps techniques that are popular in the legitimate business world. This is no surprise, since running such a criminal system is similar to operating a major commercial website, and cybercrime-as-a-service is now a common “business model.” AI will be further infused throughout these applications over time to help them achieve greater scale and to make them harder to defend against.
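One way those abrupt start-and-stop bursts appear on the defender’s side is as sudden multiples of baseline login traffic. Below is a toy sliding-window monitor to illustrate the idea; the window size, threshold, and traffic numbers are all illustrative placeholders, and real systems use far richer signals than raw request counts.

```python
# A toy illustration of how the abrupt start/stop attack bursts described
# above show up in a defender's telemetry: compare login attempts in the
# current minute to a trailing baseline and flag sudden multiples.
# Window size and threshold are illustrative placeholders.
from collections import deque

class LoginBurstDetector:
    def __init__(self, window: int = 60, multiplier: float = 5.0):
        self.history = deque(maxlen=window)  # per-minute login attempt counts
        self.multiplier = multiplier

    def observe(self, attempts_this_minute: int) -> bool:
        """Return True if the current minute looks like a sudden burst."""
        baseline = (sum(self.history) / len(self.history)) if self.history else None
        self.history.append(attempts_this_minute)
        if not baseline:
            return False
        return attempts_this_minute > baseline * self.multiplier

detector = LoginBurstDetector()
traffic = [900, 950, 1_020, 980, 1_000, 25_000]  # illustrative per-minute counts
print([detector.observe(n) for n in traffic])    # only the final spike is flagged
```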

So how can we protect against such automated attacks? The only viable answer is automated defenses on the other side. Think of that evolution as a progression of maturity levels, from level 1 through level 5.

Right now, the long tail of organizations is at level 1, but sophisticated organizations are typically somewhere between levels 3 and 4. In the future, most organizations will need to be at level 5. Getting there successfully across the industry requires companies to evolve past old thinking. Companies with the “war for talent” mindset of hiring huge security teams have started pivoting to also hire data scientists to build their own AI defenses. This might be a temporary phenomenon: While corporate anti-fraud teams have been using machine learning for more than a decade, the traditional information security industry has only flipped in the past five years from curmudgeonly cynicism about AI to excitement, so they might be over-correcting.

But hiring a large AI team is unlikely to be the right answer, just as you wouldn’t hire a team of cryptographers. Such approaches will never reach the efficacy, scale, and reliability required to defend against constantly evolving cybercriminal attacks. Instead, the best answer is to insist that the security products you use integrate with your organizational data to be able to do more with AI. Then you can hold vendors accountable for false positives, false negatives, and the other challenges of getting value from AI. After all, AI is not a silver bullet, and it’s not sufficient to simply be using AI for defense; it has to be effective.

The best way to hold vendors accountable for efficacy is by judging them based on ROI. One of the beneficial side effects of cybersecurity becoming more of an analytics and automation problem is that the performance of all parties can be more granularly measured. When defensive AI systems create false positives, customer complaints rise. When there are false negatives, ATOs increase. And there are many other intermediate metrics companies can track as cybercriminals iterate with their own AI-based tactics.
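To make that concrete, here is a minimal sketch, with entirely hypothetical counts, of the kind of intermediate metrics a company might track: the false-positive and false-negative rates of its automated login defenses over some period.

```python
# A minimal sketch of the accountability metrics described above, computed
# from a confusion matrix of login decisions over some period.
# All counts used in the example call are hypothetical placeholders.

def defense_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        # Legitimate logins wrongly blocked -> customer complaints
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        # Attacks that slipped through -> account takeovers (ATOs)
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

# Hypothetical day of traffic: tp = attacks blocked, fp = good users blocked,
# tn = good users allowed, fn = attacks allowed through.
print(defense_metrics(tp=880_000, fp=1_200, tn=98_800, fn=20_000))
```

And as the next paragraph notes, when the overwhelming majority of login attempts are automated attacks, even a small false-negative rate translates into a large absolute number of ATOs.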

If you’re surprised that the post-COVID Internet sounds like it’s going to be a Terminator-style battle of good AI vs. evil AI, I have good news and bad news. The bad news is, we’re already there to a large extent. For example, among major retail sites today, around 90% of login attempts typically come from cybercriminal tools.

But maybe that’s the good news, too, since the world obviously hasn’t fallen apart yet. This is because the industry is moving in the right direction, learning quickly, and many organizations already have effective AI-based defenses in place. But more work is required in terms of technology development, industry education, and practice. And we shouldn’t forget that sheltering-in-place has given cybercriminals more time in front of their computers too.

Shuman Ghosemajumder is Global Head of AI at F5. He was previously CTO of Shape Security, which was acquired by F5 in 2020, and was Global Head of Product for Trust & Safety at Google.
