Elastic AI agent flags Axios npm backdoor within minutes

Elastic Security Labs’ LLM agent caught a backdoor in a new Axios npm release within minutes, prompting alerts to maintainers and rapid publication of detection rules.
Elastic Security Labs reports its lightweight agent, powered by a large language model, flagged a backdoor in the latest Axios package on npm within minutes of release. Researchers notified maintainers and began publishing detections as they analyzed the code.
A researcher built the monitoring pipeline after returning from the RSA Conference, setting it up on a Friday to watch code changes across major repositories and triage them with an LLM. The system tracked the 15,000 most-downloaded packages on PyPI and npm and posted alerts to Slack. Three days later, it triggered shortly after the tampered Axios version appeared.
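The core loop described above — poll the most-downloaded packages, notice version changes, score them with an LLM, and alert a human channel — can be sketched in a few functions. This is an illustrative reconstruction, not Elastic's code: the function names, the scoring interface, and the alert format are assumptions.

```python
# Hypothetical sketch of the monitoring pipeline described in the article.
# detect_new_releases, triage, and format_alert are illustrative names,
# not Elastic's actual implementation.

def detect_new_releases(previous: dict, current: dict) -> list:
    """Return (name, version) pairs whose version changed since the last poll."""
    changed = []
    for name, version in current.items():
        if previous.get(name) != version:
            changed.append((name, version))
    return changed

def triage(changed: list, risk_score, threshold: float = 0.7) -> list:
    """Score each changed release (e.g. via an LLM call) and keep risky ones."""
    return [(n, v, s) for n, v in changed
            if (s := risk_score(n, v)) >= threshold]

def format_alert(name: str, version: str, score: float) -> str:
    """Build a Slack-style alert line for a flagged release."""
    return f":rotating_light: {name}@{version} flagged (risk {score:.2f})"
```

In a real deployment, `risk_score` would wrap an LLM call over the release diff, and the alert would go to a Slack webhook rather than a return value; the threshold is an arbitrary placeholder.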
Axios is a JavaScript library used to make HTTP requests. It is distributed through npm, the default package manager for Node.js, and sees more than 100 million downloads each week. Following the alert, Elastic’s team contacted the Axios maintainers and started a deep review of the affected release.
“We reverse-engineered the whole thing, found exactly what was happening, published detections for it and published the findings as they were happening — so it was all real time, and this was happening, late night, America time,” said James Spiteri, who leads Elastic Security Labs. He noted that other researchers quickly shared data and that “it was pretty incredible how everyone rallied together.”
Elastic has open-sourced the monitoring tool. Spiteri described the approach as intentionally lightweight: watch code pushes, use an LLM to assess risk, and alert humans immediately for review. Early detection sped the response, from alerting maintainers to shipping detection rules for users.
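The "use an LLM to assess risk, then alert humans" step might look something like the sketch below. The prompt wording and the `RISK=` response convention are assumptions for illustration; the article does not describe Elastic's actual prompts or model.

```python
# Illustrative sketch of LLM-based triage of a package diff.
# The prompt text and response format are assumptions, not Elastic's.

def build_triage_prompt(package: str, version: str, diff: str) -> str:
    """Assemble a review prompt for the model from a release diff."""
    return (
        "You are a supply-chain security analyst. Review this package diff "
        "for signs of a backdoor (obfuscation, exfiltration, install hooks).\n\n"
        f"Package: {package}@{version}\n"
        f"Diff:\n{diff}\n\n"
        "Answer on one line: RISK=<low|medium|high> REASON=<short reason>"
    )

def parse_verdict(response: str) -> str:
    """Extract the risk level from the model's reply.

    Anything unparseable is treated as 'high' so that a confused model
    still escalates to a human reviewer rather than silently passing.
    """
    for token in response.split():
        if token.startswith("RISK="):
            level = token[len("RISK="):].lower()
            if level in ("low", "medium", "high"):
                return level
    return "high"
```

Failing closed in `parse_verdict` matches the lightweight philosophy Spiteri describes: the LLM only filters, and a human always reviews anything the model cannot confidently clear.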
The team is testing further uses for LLMs, including help with identifying threat actors and emulating attacks to build better detections. Rapid gains in code-analysis capabilities could aid security operations, incident response and detection engineering. “These models have been getting better and better and better,” Spiteri observed. “They only get better when people try things and they fail, and they push the model vendors to keep pushing the boundaries.”
Spiteri oversees generative AI and automation efforts for Elastic Security. He previously worked in product marketing and solutions architecture and has built custom SIEM platforms for security teams across industries.








