Anthropic, the company behind the widely used AI chatbot Claude, says it uncovered a large-scale extortion operation in which cybercriminals abused Claude to automate and orchestrate sophisticated attacks.
The company issued a Threat Intelligence report describing several instances of Claude abuse, in which it states:
“Cyber threat actors leverage AI—using coding agents to actively execute operations on victim networks, known as vibe hacking.”
This means that cybercriminals found ways to exploit vibe coding by using AI to design and launch attacks. Vibe coding is a way of creating software with AI: someone simply describes what they want an app or program to do in plain language, and the AI writes the actual code to make it happen.
The process is much less technical than traditional programming, making it easy and fast to build applications, even for those who aren't expert coders. For cybercriminals, this lowers the technical bar for launching attacks and helps them do it faster and at a larger scale.
Anthropic provides several examples of Claude's abuse by cybercriminals. One of them was a large-scale operation that, in the last month alone, potentially affected at least 17 distinct organizations across government, healthcare, emergency services, and religious institutions.
The people behind these attacks combined open source intelligence tools with an “unprecedented integration of artificial intelligence throughout their attack lifecycle.”
This systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials, and other sensitive information.
The cybercriminals' primary goal was extortion of the compromised organizations. The attacker delivered ransom notes to compromised systems demanding payments ranging from $75,000 to $500,000 in Bitcoin, and threatened that if the targets refused to pay, the stolen personal records would be published or sold to other cybercriminals.
Other campaigns stopped by Anthropic involved North Korean IT worker schemes, Ransomware-as-a-Service operations, credit card fraud, information stealer log analysis, a romance scam bot, and a Russian-speaking developer using Claude to create malware with advanced evasion capabilities.
But the case in which Anthropic found cybercriminals attacking at least 17 organizations represents an entirely new phenomenon: the attacker used AI throughout the entire operation. From gaining access to the targets' systems to writing the ransom notes, Claude was used at every step to automate this cybercrime spree.
Anthropic has a Threat Intelligence team that investigates real-world abuse of its AI agents and works with other teams to find and improve defenses against this type of abuse. It also shares key indicators with partners to help prevent similar abuse across the ecosystem.
Anthropic did not name any of the 17 organizations, but it stands to reason we'll learn who they are sooner or later: one by one as they report data breaches, or all at once if the cybercriminals decide to publish a list.
Data breaches of organizations that we’ve given our data to happen all the time, and that stolen information is often published online. Malwarebytes has a free tool for you to check how much of your personal data has been exposed—just submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scanner and we’ll give you a report and recommendations.