Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. Dr. Jason Zhang of Anomali warns all involved in cybersecurity that the AI era is here, and we all need to be ready to brave the new world.
Artificial Intelligence (AI) is going mainstream, and that will impact cybersecurity in significant ways. But if we’re not prepared for what’s ahead, we’re destined to learn about it the hard way.
From a defender’s perspective, the advent of AI is a welcome development. It can play an incredibly useful role in identifying malicious behavior among the millions of events taking place each day across a network. For instance, it’s increasingly common for attackers to leverage legitimate Windows system utilities (a technique known as living off the land) to execute malware. Because malicious and legitimate activity rely on the same tools, the two are hard to tell apart. But this is an instance where machine learning and AI improve the signal-to-noise ratio, helping security teams differentiate between the malicious and the benign.
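To make the signal-to-noise idea concrete, here is a minimal sketch of the kind of statistical triage a detection pipeline might apply. All process names, baseline counts, and the threshold are illustrative assumptions, not real telemetry; production systems use far richer features and models.

```python
from collections import Counter
from math import log

# Hypothetical baseline: counts of (parent, child) process pairs
# observed during normal operation. Names and counts are invented.
BASELINE = Counter({
    ("explorer.exe", "powershell.exe"): 500,
    ("services.exe", "svchost.exe"): 9000,
    ("winword.exe", "powershell.exe"): 1,  # rare: Office spawning a shell
})
TOTAL = sum(BASELINE.values())

def rarity_score(parent: str, child: str) -> float:
    """Higher score = rarer pair = more suspicious.
    Unseen pairs are treated as seen once (add-one smoothing)."""
    count = BASELINE.get((parent, child), 0) + 1
    return -log(count / (TOTAL + 1))

def triage(events, threshold=6.0):
    """Return only the events rare enough to merit analyst attention."""
    return [e for e in events if rarity_score(*e) >= threshold]

events = [
    ("services.exe", "svchost.exe"),    # routine, scored near zero
    ("winword.exe", "powershell.exe"),  # Office app launching a shell
]
suspicious = triage(events)
```

The routine pair scores near zero and is filtered out, while the Office-spawns-a-shell pair survives triage: the same tool (PowerShell) is benign or suspicious depending on context, which is exactly the distinction a learned baseline helps surface.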
That’s all for the good. At the same time, however, AI now serves as a valuable weapon for cyber-criminals and other malicious actors, who can use it to analyze huge amounts of data. That helps them target victims more precisely and automate their processes to speed up the cadence of attacks, making it harder for victims to keep up with the pace of these cyber barrages.
The AI Era in Cybersecurity
In this accelerated race against the clock, cyber-criminals can scale up the processes required to launch an attack faster than ever thanks to automation. That leaves defenders scrambling; by the time they remediate one breach, new attacks are already underway, making it harder than ever to detect unknown malware or new entry vectors.
None of this should come as a surprise. We’re talking about a technology that anyone can access. It doesn’t require the special collective expertise of a hacker group working at the behest of a wealthy nation-state. Many so-called lone wolves working on their own now have the requisite knowledge to launch sophisticated attacks with the help of AI.
The threat landscape is fluid, always changing over time. If we step back from the fray and view this in a broader context, we can think about the AI era in cybersecurity as yet another installment in the decades-long arms race between defenders and attackers. And while threat actors are getting faster, defenders can also use AI to their advantage to be more proactive, leveraging the technology’s powerful automation capability to analyze massive amounts of data and more rapidly identify attack patterns.
So, who’s winning? Despite the accelerated cadence of attack introduced by AI, the fundamental framework for dealing with threats hasn’t changed much through the years. Attackers still need to move through the successive stages of an attack, from initial probing to credential harvesting, before they can realize their financial objectives. I think that ultimately, this battle will turn on old-school principles governing cyber defense and whether defenders do the right things when it comes to people, processes, and technology (PPT), a framework that has been around since the early 1960s and remains as relevant as ever.
Final Thoughts and Best Practices in the AI Era
As we navigate our way into the AI era, keep the following signposts in mind:
- Clearly, it’s important to remain current with the latest in defensive technology, but that can’t come at the expense of paying attention to the role humans play in security. In my experience, it’s the most critical layer of all. If you fail to adequately train your staff about security awareness, you can buy as many new security products as you like, but you still won’t be protected.
- The reality is that your people will click open malicious attachments or links that allow attackers entry into your networks. The advent of AI and the introduction of new threats only make this more likely, further underscoring the importance of inculcating best practices.
- If your organization opts to use generative AI like ChatGPT to help with repeatable work, employees must learn to use it properly. Data shared with ChatGPT could wind up as training data for machine learning/large language models outside your organization. This isn’t a theoretical problem: look at what happened recently at Samsung Electronics, where engineers accidentally leaked company information via ChatGPT on more than one occasion. Again, this goes back to the importance of having strong policies and processes that keep up with the times and protect company and customer data.
- In our post-pandemic world, many more people now work regularly from home. That societal shift has introduced myriad new cyber-protection challenges, given the number and types of devices accessing network data from remote locations. All the more reason why organizations shouldn’t delay adapting their processes to deal with the new security challenges this change introduces.
- Attackers can leverage AI and automation, but they can’t automate the entire process or all of the different stages of the infection chain. At best, they can possibly automate one or two stages. That puts the onus on defenders to build up a well-designed, multi-stage protection system. Even if some attacks bypass the first couple of stages of defense, the system will still detect the threat at a later stage to prevent any actual damage from occurring.
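The multi-stage idea in the last point can be sketched as a chain of independent checks, one per stage of the infection chain. Each check below is a stand-in for a real control (mail filtering, endpoint monitoring, egress inspection); the function names, event fields, and domain are hypothetical assumptions for illustration only.

```python
# Each function stands in for a real control at one stage of the
# infection chain; the names, event fields, and logic are invented.
def email_filter(event):
    return "phish" in event.get("attachment", "")

def endpoint_monitor(event):
    return event.get("spawned_shell", False)

def egress_inspection(event):
    return event.get("dest_domain", "") in {"evil.example"}

STAGES = [
    ("delivery", email_filter),
    ("execution", endpoint_monitor),
    ("command-and-control", egress_inspection),
]

def detect(event):
    """Return the first stage that flags the event, or None if all pass."""
    for name, check in STAGES:
        if check(event):
            return name
    return None

# An attack that evades the mail filter and avoids spawning a shell
# is still caught when it calls out to its command-and-control server:
attack = {"attachment": "invoice.pdf", "spawned_shell": False,
          "dest_domain": "evil.example"}
stage = detect(attack)
```

The design choice mirrors the point above: because the stages are independent, an attacker who automates past one or two of them still has to beat every remaining layer, so a single late-stage detection is enough to prevent actual damage.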
These basic security tenets will become increasingly important as AI ups the ante in the battle over cybersecurity. There’s nothing unstoppable about malicious actors using these latest techniques. But changes – big changes – are coming, and defenders need to be prepared for what’s on the horizon.