With rapidly advancing LLM technology, the cost of developing software has already declined significantly over the past year, a trend that is likely to continue. This trend affects all forms of software, including malware. X‑Force believes this shift will fundamentally alter the dynamics of the malware threat landscape, compelling defenders and the threat intelligence community to adapt to these emerging technologies.
First and foremost, as already widely discussed, the immediate effect of adversarial AI use is expected to be that of a force multiplier for attackers. This is a numbers game, and it is not yet likely to bring noticeable upgrades in malware sophistication, contrary to the AI malware “doom and gloom” hype spread during the early days of AI. However, as traditional malware has shown, attackers often don’t need fancy techniques and implementations to be successful—they use whatever works. Underestimating this effect is therefore one of the biggest mistakes defenders can make.
On the threat intelligence side, the industry often relies on malware for attribution and for estimating actors’ capabilities. With the rise of ephemeral malware, which can be single-use and re-developed on the fly, this equation will change. Disparate but largely similar malicious C2 clients will become significantly more difficult to attribute to a single developer, given that the effort needed to create them is just a fraction of what it used to be. Worse still, spotting LLM-generated malware will become more difficult as well. Script-based malware such as Slopoly may still contain several obvious giveaways, but for compiled payloads this is a much more complex problem.
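To illustrate the kind of giveaway-based triage that remains feasible for script payloads, the sketch below scores a script against a small set of indicator patterns. The patterns themselves are hypothetical examples of LLM-generation tells (tutorial-style comments, placeholder credentials), not confirmed indicators from X‑Force analysis, and a real detection pipeline would need a far richer feature set.

```python
import re

# Hypothetical giveaway patterns (illustrative assumptions, not a vetted
# indicator list): tutorial-style step comments, placeholder credentials,
# and verbose boilerplate comments often seen in LLM-generated scripts.
GIVEAWAY_PATTERNS = [
    r"#\s*(Step \d+|TODO: implement|Replace with your)",
    r"\byour_api_key_here\b",
    r"#\s*This (function|script) (does|will)",
]

def llm_giveaway_score(script_text: str) -> int:
    """Count occurrences of the illustrative indicator patterns in a script."""
    return sum(
        len(re.findall(pattern, script_text, flags=re.IGNORECASE))
        for pattern in GIVEAWAY_PATTERNS
    )

# Toy sample containing two of the illustrative patterns.
sample = "# Step 1: Connect to the server\napi_key = 'your_api_key_here'\n"
print(llm_giveaway_score(sample))  # prints 2
```

For compiled payloads, no comparable surface-level tells survive compilation, which is why the equivalent problem there is so much harder.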
Looking ahead, AI-generated malware is only the first stage in a new arms race between defenders and attackers. The second stage is the use of agentic AI and AI-integrated malware, which allow models to make decisions during all phases of the attack chain or during the development and testing of advanced C2 frameworks. These technological improvements are already being adopted in isolated cases, either by highly capable and well-resourced actors or as proofs of concept (PromptSpy, PromptLock, PROMPTFLUX, VoidLink). As with the first stage of AI adoption, threat actors will integrate these capabilities into their attacks on varying timelines. While Hive0163 may still be in an early phase of AI adoption, the future potential of state-of-the-art AI technologies in the hands of an already highly disruptive threat actor poses an imminent risk to defenders.
Finally, the impact on the threat landscape will primarily depend on threat actors’ access to weaponizable AI. State-sponsored actors may have access to proprietary AI technology, while other well-resourced actors may use paid AI services (or stolen API keys) while attempting to evade those services’ security mechanisms. Most actors, however, are likely to rely on free and anonymous AI access or self-hosted models. An uncontrolled release of such technologies without sufficient security measures could open a Pandora’s box, something defenders should be prepared for.