Is Vibe Hacking the Future of Cybercrime? Why AI Cyberattacks Still Need a Hacker’s Touch


Artificial intelligence, and in particular the large language models that have become ubiquitous, is already being weaponized by cybercriminals. The threat is not a future concern but a present reality. Attackers have been observed using AI-powered phone calls to target users, an estimated 51 percent of all spam is now reported to be AI-generated, and deepfakes have emerged as a significant cybersecurity issue. This has led many to ask how far the AI cyberattack threat has advanced, and whether the cybersecurity world is on the cusp of an era of “vibe hacking” and fully autonomous attacks.

The Limits of AI in Hacking

According to Michele Campobasso, a senior security researcher at Forescout, “the current reality of AI’s hacking capabilities is far less advanced than the hype might suggest.” In an analysis conducted between February and April 2025, Campobasso’s team tested more than 50 AI models against a range of cybersecurity tasks. The results were telling: open-source LLMs proved “unsuitable even for basic vulnerability research,” while even underground criminal models struggled with usability issues, including poor output formatting and unstable behavior. Commercial models performed best, yet only three of the 18 tested could generate a working exploit for the most difficult test cases.

AI’s Real Role in the Underground

Despite the dramatic headlines about autonomous attacks, Campobasso found “no clear evidence of real threat actors” using LLMs for complex, end-to-end cyberattacks. Instead, AI’s current application is far more practical and language-focused. According to the analysis, threat actors are leveraging AI for tasks where language is more critical than code, such as crafting sophisticated phishing campaigns, executing influence operations, or generating boilerplate components for malware. The study concluded that LLMs are inconsistent and still require “substantial user guidance” to complete exploit development tasks.

Fundamentals of Defense Remain Unchanged

The findings offer a dose of reality to an increasingly hyped landscape. The age of “vibe hacking” may be approaching, but it is not arriving as quickly as the “vibe coding” phenomenon that has accelerated software development. For cybersecurity defenders, the core principles of the job are not fundamentally changing. An AI-generated exploit, Campobasso stated, is “still just an exploit, and it can be detected, blocked, or mitigated by patching.” He added that the overly “confident tone” of AI models, even when they are incorrect, poses a risk of its own by misleading inexperienced attackers who rely on them. The message for defenders is clear: AI is making attacks more scalable, but the foundational work of cybersecurity remains the same.


