There’s an old saying that “nice guys finish last,” and in cybersecurity circles, at least, that often seems to be the case.
Burnout, high stress and job fatigue are common in cybersecurity, a byproduct of pitting small security teams against wave after wave of adversaries. They face the equivalent of kayaking upstream in Class-5 rapids — a task that is exhausting, unsustainable and a little bit insane.
What security teams need is more manpower, more time, more eyes on their attack surfaces and the ability to respond to threats at a moment’s notice. Propelled by meteoric advancements in generative AI, they might finally get their wishes fulfilled.
AI can analyze threat data faster than any human
When was the last time you heard a security team complain they don’t have enough work? Sifting through logs, threat alerts and a constant stream of notifications is a laborious, time-consuming process.
Even in best-case scenarios, when teams have full visibility of their attack surfaces, they simply don’t have enough hours in a day to manually absorb and act on intelligence. That makes AI’s speed and precision a game-changer for the modern SOC.
“It’s like a cyber guardian angel, digging through massive amounts of data at lightning speed and spotting threats that would take us as humans ages to uncover,” says Joshua Spencer, founder of BastionGPT, which bills itself as a ChatGPT for healthcare professionals.
“Security teams are constantly in react mode, delivering maximum effort but always a step behind the attackers,” says Glen Pendley, Chief Technology Officer at Tenable. “Generative AI changes that by making people much more efficient. Its power is in cutting through complexity to enable security teams to work faster, search faster, analyze faster and make decisions faster.”
AI’s computational speed means good data no longer has to go to waste.
“The amount of information that flows into a security operations center (even of modest size) can be truly staggering,” says Robert Ricco, a threat intelligence analyst at GroupSense. “All of this data is basically useless until it is analyzed, collated and digested. Combine this with limited budgets and a crippling shortage of cybersecurity professionals and you have a situation where threats slip through the cracks and anomalous patterns are missed entirely.”
We’re already seeing generative AI and LLMs use that speed to pull relevant bits of context when presenting information, which has huge implications for how incident responders find and eliminate threats.
“It’s the equivalent of having a command prompt into which you can type ‘has my organization been impacted by any zero-day attacks in the last 24 hours?’ and then getting an answer with contextual sensitivity and information pulled from many different sources, which would take a human an order of magnitude longer to accomplish,” says Rafal Los, Head of Services Strategy at ExtraHop.
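To make the idea concrete, here is a minimal sketch of the aggregation step behind such a query: pulling recent entries from several telemetry sources into one contextual prompt. The source names, record fields and helper function are all hypothetical illustrations, not any vendor’s API.

```python
from datetime import datetime, timedelta, timezone

def build_context_prompt(question, sources, window_hours=24):
    """Gather recent entries from several telemetry sources and fold
    them into a single prompt an LLM could answer with full context."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    lines = [f"Question: {question}", "Relevant telemetry:"]
    for name, entries in sources.items():
        for entry in entries:
            if entry["time"] >= cutoff:  # keep only in-window events
                lines.append(f"- [{name}] {entry['summary']}")
    return "\n".join(lines)

# Hypothetical feeds: an EDR alert stream and a vulnerability scanner.
now = datetime.now(timezone.utc)
sources = {
    "edr": [{"time": now,
             "summary": "Suspicious PowerShell spawn on host FIN-07"}],
    "scanner": [{"time": now - timedelta(days=3),  # outside the window
                 "summary": "Critical RCE flaw unpatched on FIN-07"}],
}
prompt = build_context_prompt(
    "Has my organization been impacted by any zero-day attacks "
    "in the last 24 hours?",
    sources,
)
```

The interesting work in a real system is in the retrieval and the model, not this glue code, but the shape is the same: one question, many sources, one contextualized answer.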
AI can cut through noise to spot real vulnerabilities earlier
AI’s ability to identify security weaknesses and distinguish real threats from false positives gives white hats another arrow in their quiver.
“As a defender, one of the biggest challenges is wading through noise,” says Anthony Green, Chief Technology Officer at FoxTech Cyber. “Depending on the context, vulnerabilities that are a big deal in one IT system may be largely insignificant in another. AI can help defenders wade through the oceans of alerts and information to find the risks that really matter – and see the iceberg up ahead before the ship crashes into it.”
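The context-dependence Green describes can be sketched in a few lines: the same raw severity score should rank differently depending on the asset it sits on. The weighting scheme below is purely illustrative, not any product’s scoring model.

```python
def contextual_risk(cvss, internet_facing, asset_criticality):
    """Weight a raw CVSS base score by environmental context:
    internet exposure and business criticality (0.0 to 1.0)."""
    exposure = 1.5 if internet_facing else 0.6
    return round(cvss * exposure * (0.5 + asset_criticality), 1)

findings = [
    # A "critical" flaw on an isolated, low-value test box...
    {"id": "VULN-A", "cvss": 9.8, "internet_facing": False, "crit": 0.2},
    # ...versus a "high" flaw on an internet-facing payment server.
    {"id": "VULN-B", "cvss": 7.5, "internet_facing": True, "crit": 0.9},
]
ranked = sorted(
    findings,
    key=lambda f: contextual_risk(f["cvss"], f["internet_facing"], f["crit"]),
    reverse=True,
)
```

Under this toy model the lower-CVSS finding on the exposed, critical asset outranks the nominally critical one, which is exactly the kind of noise-cutting prioritization defenders want.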
On top of this, engineers can use AI to check the integrity of new code as it is being written, rather than relying on manual review by staff, reducing the likelihood of vulnerabilities being discovered in production. The experience is not unlike writers using a spell checker to proof their draft.
“We can shift the use of AI left and eliminate the vulnerabilities to begin with, as opposed to sifting through network logs, network traffic, and trying to find attackers,” says Chris Wysopal, Chief Technology Officer and co-founder of Veracode. “With the increase in automated attacks, it’s no longer tenable to continue to remediate flaws entirely manually.”
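In skeletal form, a shift-left check reviews newly added lines before they ever merge. The regex rules below are deliberately crude illustrations; real tools of the kind Wysopal describes rely on far deeper analysis (data flow, AST parsing, trained models) than pattern matching.

```python
import re

# Illustrative rules only; a production scanner would use semantic
# analysis, not regexes, to avoid false positives and negatives.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on dynamic input"),
    (re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I), "hardcoded credential"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def review_diff(added_lines):
    """Flag risky patterns in newly added lines before they land."""
    findings = []
    for lineno, line in enumerate(added_lines, 1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

diff = [
    'resp = requests.get(url, verify=False)',
    'db_password = "hunter2"',
    'result = compute(data)',
]
issues = review_diff(diff)
```

Wired into a pre-commit hook or CI step, even a check this simple moves the discovery of a flaw from production back to the editor, which is the whole point of shifting left.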
AI will push threat hunting operations to new heights
Threat hunting is the proactive investigation of potential threats to an organization, a discipline reserved for only the most elite cybersecurity practitioners. Far from replacing threat hunters, AI can make them much more formidable. Black hats, beware!
One way AI can benefit threat hunting programs is by shortening the learning curve, providing aspiring hunters a faster path to competency.
“A threat hunter’s job is to translate concepts into queries. AI-based platforms that can embellish threat hunting-related questions to make the queries more expressive can quickly move entry-level threat hunters to the next level, and make veterans even more efficient and effective,” says Morgan Wright, Chief Security Advisor at SentinelOne. After all, he says, “machine speed attacks require machine speed responses.”
Other experts tout AI’s potential to stage massive threat simulations and track how an organization’s defenses hold up to those scenarios.
“These simulations not only help organizations reveal weaknesses in their security architecture, but they also predict the likelihood that each weakness will be exploited,” writes Justin Shattuck, Chief Information Security Officer at Resilience, a cyber risk solutions company. “AI-driven models like this effectively quantify an organization’s risk and exposure, allowing a security team to more effectively see which vulnerabilities need to be immediately addressed, and prioritize the most impactful security investments accordingly.”
By running simulations, AI can theoretically automate an organization’s playbook for how to respond based on indicators of compromise that arise from any given threat scenario. While this application may be a few years away, companies are already laying the groundwork.
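At its simplest, an automated playbook of this kind maps indicator-of-compromise types to ordered response steps. The indicator names and actions below are illustrative placeholders; a real SOAR playbook would be far richer and tailored to the environment.

```python
# Illustrative IoC-type-to-response mapping; real playbooks are
# environment-specific and, per the experts above, could eventually
# be generated and refined by AI from observed user behavior.
PLAYBOOK = {
    "malicious_hash": ["quarantine file", "scan host", "block hash at EDR"],
    "c2_domain": ["block domain at DNS/proxy", "hunt for other beacons"],
    "compromised_account": ["disable account", "revoke sessions",
                            "reset credentials"],
}

def respond(indicators):
    """Expand detected indicator types into an ordered,
    de-duplicated list of response actions."""
    actions = []
    for ioc_type in indicators:
        for step in PLAYBOOK.get(ioc_type, ["escalate to analyst"]):
            if step not in actions:
                actions.append(step)
    return actions

plan = respond(["c2_domain", "compromised_account"])
```

The hard, still-emerging part is not executing such a mapping but having AI learn and update it from the environment itself, which is what the groundwork described above is aiming at.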
“I believe it won’t be too far out where security tools can learn from the user environment and then develop automated security playbooks based on a series of user prompts to orchestrate and automate response,” says Shannon Murphy, Global Security and Risk Evangelist at Trend Micro. “The evolution of this would be to develop such playbooks completely autonomously based on the user data and user environment by detecting anomalous behavior.”