Anthropic Says Bad Actors Have Now Turned To ‘Vibe Hacking’

Vibe coding has become one of the biggest buzzwords in AI in recent months. Being able to lean on a large language model can be helpful, because it speeds up coding by letting AI handle the bulk of the legwork. But it’s not all good news: Anthropic — the company behind Claude — says that “vibe hacking” is becoming an increasingly popular way to extort victims.

In a new report shared this week, the Claude developer says that threat actors have started adapting their operations to take advantage of the benefits AI offers. For starters, the company says that AI has “lowered the barrier to sophisticated cybercrime.” It goes on to note that agentic AI has become a tool for threat actors to wield against victims, especially given the glut of AI agents that have debuted in recent months. Even Anthropic released a new Claude agent for Chrome this week.

AI also speeds up the process for cybercriminals as they profile their victims, analyze data they’ve stolen, and even create new identities to use in their schemes. And, it seems, some hackers are going beyond just using AI to simplify their attacks, as Anthropic says it recently disrupted a threat actor who was using AI to “an unprecedented degree.”

The illicit art of vibe hacking
Much like vibe coding, which lets AI do the heavy lifting while writing code, vibe hacking offloads the work of an attack onto the AI. The threat actor Anthropic says it disrupted was using Claude Code to run their operation. Beyond using AI to find targets and send them messages, this hacker even let Claude determine which data was worth extracting and how much to demand in ransom for it.

The company says that this move “represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time.”

Further, Anthropic says it believes that this is the first of many vibe hacks to come, and it has built a detection system to help find bad actors abusing its tools in this way. As AI continues to spread and new vibe-coding apps come out, it’s likely we’ll see these kinds of attacks become more commonplace, raising even more questions about how Trump’s AI Action Plan will regulate the use of AI. After all, we’ve already seen hackers use AI to break AI, so it really shouldn’t be surprising that they’re using it to try to break into our wallets, too.
