RHC Editorial Team: 28 July 2025 07:41
Kaspersky Lab specialists studied the activity of the FunkSec group, which emerged in late 2024. The group's main characteristics are the use of AI-based tools (including in ransomware development), a high degree of adaptability, and mass cyberattacks.
According to the experts, FunkSec attacks organizations in the public sector, as well as the IT, financial, and education sectors, in Europe and Asia. FunkSec operators typically demand unusually low ransoms, sometimes as little as $10,000, and also offer to sell the stolen data back to their victims at a very low price.
Experts believe this approach allows the group to launch a large number of cyberattacks and quickly build a reputation within the criminal community. Furthermore, the sheer volume of attacks suggests that the attackers are using artificial intelligence to optimize and scale their operations.
The report emphasizes that the FunkSec ransomware stands out for its complex technical architecture and its use of artificial intelligence. The developers combined full data encryption and data exfiltration in a single executable written in Rust. The malware can terminate over 50 processes on victims' devices and has self-cleaning capabilities, which makes incident analysis more difficult.
It should also be noted that FunkSec uses advanced methods to evade detection, which complicates researchers’ work.
The FunkSec encryptor is not distributed on its own: it is accompanied by a password generator (used for brute-force and password-spraying attacks) and a DDoS tool.
In all cases, the researchers found clear signs that the code had been generated with large language models (LLMs). Many code fragments were clearly not written by hand but produced automatically. This is evidenced by placeholder "stub" comments (e.g., "stub for actual verification") as well as technical inconsistencies: for instance, one program mixes commands intended for different operating systems. Furthermore, the presence of declared but unused functions reflects how LLMs combine separate code fragments without removing the unnecessary parts.
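To give an idea of what such artifacts look like in practice, the Rust fragment below is a purely hypothetical sketch (not code from the FunkSec samples) showing the three telltale patterns mentioned above: a placeholder "stub" comment, a function that is declared but never called, and commands for different operating systems mixed in the same routine.

```rust
// Hypothetical illustration of LLM-generation artifacts; this is NOT FunkSec code,
// only a sketch of the patterns analysts look for in generated malware source.
use std::process::Command;

// Declared but never called anywhere: a leftover typical of code assembled
// from several generated fragments (the compiler flags it as dead code).
fn check_license(_key: &str) -> bool {
    // stub for actual verification
    true
}

fn list_processes() {
    // Inconsistency: a Windows-only utility and a Unix-only invocation side by side,
    // as if fragments written for different operating systems were merged.
    let _ = Command::new("tasklist").output();      // Windows
    let _ = Command::new("ps").arg("aux").output(); // Unix
}

fn main() {
    list_processes();
}
```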
"We are increasingly seeing attackers use generative AI to create malicious tools. It speeds up development, allows attackers to adapt their tactics more quickly, and lowers the barrier to entry into the industry. However, the code generated this way often contains errors, so attackers cannot yet fully rely on these new technologies," comments Tatyana Shishkova, an expert at Kaspersky GReAT.