Lightweight LLM-Driven Process Alerted Elastic’s Security Team, Says James Spiteri
The security community rapidly responded to the recent supply-chain attack against the popular JavaScript library Axios, and credit goes in no small part to the Elastic Security Labs team – and a hastily created artificial intelligence agent.
Returning from the RSAC Conference last month, one of Elastic Security Labs’ researchers decided on a Friday to quickly build a lightweight pipeline that uses a live AI agent to “monitor changes as they get pushed to package repos,” relying on a large language model “to determine if the changes are malicious” and, if so, alerting him on Slack. He pointed it at the top 15,000 PyPI and npm packages, ranked by download count.
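The article doesn’t publish the pipeline’s internals, but the described loop — fetch a release diff, ask an LLM whether it looks malicious, alert on a hit — can be sketched roughly as follows. All names, the prompt and the `classify_diff` helper are illustrative assumptions, not Elastic’s actual implementation; a stub stands in for the model call.

```python
# Hypothetical sketch of an LLM-driven package-release monitor.
# The prompt, function names and stubbed model are illustrative only.
import json
from dataclasses import dataclass

PROMPT = (
    "You are a supply-chain security analyst. Review the diff between two "
    "releases of the package '{name}'. Reply with JSON: "
    '{{"malicious": true|false, "reason": "..."}}.\n\nDiff:\n{diff}'
)

@dataclass
class Verdict:
    malicious: bool
    reason: str

def classify_diff(name: str, diff: str, llm) -> Verdict:
    """Ask the model whether a release diff looks malicious."""
    raw = llm(PROMPT.format(name=name, diff=diff))
    data = json.loads(raw)
    return Verdict(bool(data["malicious"]), str(data.get("reason", "")))

def monitor(releases, llm, alert):
    """For each (package, diff) pair, fire the alert callback on a hit.

    In a real deployment `alert` would post to Slack and `releases`
    would come from polling the npm and PyPI registries.
    """
    for name, diff in releases:
        verdict = classify_diff(name, diff, llm)
        if verdict.malicious:
            alert(name, verdict.reason)

# Stub model for demonstration: flags diffs containing obfuscated eval.
def fake_llm(prompt: str) -> str:
    hit = "eval(atob(" in prompt
    return json.dumps({"malicious": hit,
                       "reason": "obfuscated eval" if hit else ""})

alerts = []
monitor(
    [("axios", "+ eval(atob('bWFsd2FyZQ=='))"),
     ("left-pad", "+ return str.padStart(n)")],
    fake_llm,
    lambda name, reason: alerts.append((name, reason)),
)
print(alerts)  # → [('axios', 'obfuscated eval')]
```

The interesting design choice is that the model acts as the detection logic itself rather than feeding a rules engine: the pipeline’s accuracy rises automatically as the underlying LLMs improve, which matches Spiteri’s point below about the models “getting better and better.”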
Just three days later, an alert sounded, minutes after someone backdoored the latest distribution of Axios, the widely used JavaScript library for making HTTP requests, which gets downloaded more than 100 million times per week. It’s distributed via npm – the GitHub-owned default package manager for the JavaScript runtime environment Node.js.
The researcher quickly reached out to Axios’ maintainers on social platform X to alert them. “Thankfully, it got a lot of attention – really, really quick,” said James Spiteri, the head of Elastic Security Labs, which began responding to the incident.
“We reverse-engineered the whole thing, found exactly what was happening, published detections for it and published the findings as they were happening – so it was all real time, and this was happening, late night, America time,” Spiteri said. Other security researchers also began sharing findings, and “it was pretty incredible how everyone rallied together,” he said (see: Backdooring of JavaScript Library Axios Tied to North Korea).
The success of this supply-chain monitoring tool, which Elastic has open sourced, is due in part to researchers continuing to experiment with what the latest LLMs can do, and not being afraid to fail, Spiteri said.
“These models have been getting better and better and better, which is amazing to see, because they only get better when people try things and they fail, and they push the model vendors to keep pushing the boundaries,” he said.
In this video interview with ISMG, Spiteri also discussed:
- Lessons learned from the success of this AI-driven tool designed to spot supply-chain attacks in widely used repos;
- Potential future use cases, including identifying threat actors and emulating attacks to build better detections;
- How rapid advances in the latest LLMs, including their code-analysis capabilities, could bolster many security roles.
Spiteri leads generative AI and automation efforts for Elastic Security. Previously, he served as director of product marketing and as a security specialist on Elastic’s solutions architecture team, helping customers and users worldwide architect their Elastic deployments for security analytics. Before joining Elastic, Spiteri built custom SIEM platforms for security operations centers across various sectors. He is also the creator of explainpls.ai, ohmymalware.com, whichphish.com and log4shell.threatsearch.io, and a regular speaker at conferences around the globe, including RSAC, Black Hat, SecTor, AWS re:Invent, Google Cloud Next, GITEX and GISEC.
