Experimental PromptLock ransomware uses AI to encrypt, steal data


Threat researchers have discovered the first known AI-powered ransomware, dubbed PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems.

The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts.

How PromptLock works

According to ESET researchers, PromptLock is written in Golang and uses the Ollama API to access the gpt-oss:20b large language model. The LLM is hosted on a remote server, to which the threat actor connects through a proxy tunnel.

The malware uses hard-coded prompts that instruct the model to dynamically generate malicious Lua scripts for tasks including local filesystem enumeration, inspection of target files, data exfiltration, and file encryption.

File enumeration prompts
Source: ESET
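To illustrate the mechanism ESET describes, the sketch below builds a request to a locally reachable Ollama instance's `/api/generate` endpoint, asking the model for a Lua script. The prompt text, endpoint address, and helper names here are illustrative assumptions, not material recovered from the malware.

```python
import json
import urllib.request

# Hypothetical enumeration prompt, for illustration only --
# not PromptLock's actual hard-coded prompt.
ENUM_PROMPT = (
    "Write a Lua script that recursively lists files under the user's "
    "home directory and prints each path on its own line."
)

def build_generate_request(prompt, model="gpt-oss:20b",
                           host="http://127.0.0.1:11434"):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        url=f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires a running Ollama instance with the
# model pulled; the generated text comes back in the JSON "response" field:
#   with urllib.request.urlopen(build_generate_request(ENUM_PROMPT)) as resp:
#       lua_script = json.loads(resp.read())["response"]
```

Because the generated script differs from run to run, each infection can produce slightly different artifacts, which is part of what makes this approach attractive for evasion.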

The researchers also mention data destruction functionality, but that feature has not yet been implemented.

For file encryption, PromptLock uses the lightweight SPECK cipher with a 128-bit block, a rather unusual choice for ransomware, as the algorithm is considered suitable mainly for resource-constrained hardware such as RFID devices.

PromptLock’s encryption logic
Source: ESET
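For context on the cipher choice, here is a compact, illustrative Python implementation of SPECK128/128 (two 64-bit words, 32 rounds), checked against the designers' published test vector. It sketches the algorithm itself and is not code recovered from PromptLock.

```python
MASK64 = (1 << 64) - 1  # SPECK128/128 operates on a pair of 64-bit words

def ror(x, r):  # rotate a 64-bit word right by r bits
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):  # rotate a 64-bit word left by r bits
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck_round(x, y, k):
    """One SPECK round: add-rotate-xor on the word pair (x, y)."""
    x = ((ror(x, 8) + y) & MASK64) ^ k
    y = rol(y, 3) ^ x
    return x, y

def expand_key(k0, l0, rounds=32):
    """Key schedule: reuses the round function with the round index as key."""
    round_keys = [k0]
    l, k = l0, k0
    for i in range(rounds - 1):
        l, k = speck_round(l, k, i)
        round_keys.append(k)
    return round_keys

def encrypt_block(x, y, round_keys):
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

# Published SPECK128/128 test vector
rks = expand_key(k0=0x0706050403020100, l0=0x0f0e0d0c0b0a0908)
ct = encrypt_block(0x6c61766975716520, 0x7469206564616d20, rks)
assert ct == (0xa65d985179783265, 0x7860fedf5c570d18)
```

SPECK's appeal here is plain: it needs only addition, rotation, and XOR, so the whole cipher fits in a few dozen lines of Lua that an LLM can plausibly generate, but it offers no authenticated-encryption guarantees, reinforcing ESET's view that this is demo-grade work.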

Just a demo for now

ESET told BleepingComputer that PromptLock has not appeared in its telemetry; the company instead discovered the sample on VirusTotal.

The cybersecurity company believes that PromptLock is a proof-of-concept or work in progress, and not an active ransomware in the wild.

Furthermore, several signs indicate that this is a concept tool rather than a real threat at present: the weak SPECK 128-bit encryption cipher, a hard-coded Bitcoin address linked to Satoshi Nakamoto, and the unimplemented data destruction capability.

After ESET published details about PromptLock, a security researcher claimed that the malware was their project and that it had somehow leaked.

Still, the appearance of PromptLock is significant: it demonstrates that AI models can be weaponized in malware workflows, offering cross-platform capabilities, operational flexibility, and evasion, while lowering the bar for entry into cybercrime.

This evolution became evident in July, when Ukraine’s CERT reported the discovery of the LameHug malware, an LLM-powered tool that uses the Hugging Face API and Alibaba’s Qwen-2.5-Coder-32B model to generate Windows shell commands on the fly.

LameHug, believed to be deployed by Russian hackers of the APT28 group, relies on direct API calls instead of PromptLock’s proxy tunneling. Both implementations achieve the same practical result, though PromptLock’s approach is more complex and riskier.
