WormGPT, a black-hat AI tool recently launched by cybercriminals, can be used to conduct social engineering and Business Email Compromise (BEC) attacks. The tool imposes no restrictions or ethical boundaries on its use.
Generative AI has seen remarkable growth in recent times. Since the release of ChatGPT in November 2022, numerous AI tools have been created and refined for a wide range of purposes. Now, however, a new AI tool has been released that is designed specifically for black hats.
Business email compromise, commonly referred to as CEO fraud or whaling, targets businesses by impersonating senior executives or trusted partners.
BEC Attacks Revolutionised by WormGPT
As per reports, threat actors have been using ChatGPT and other AI-based tools to generate malicious emails that appear legitimate enough to trick employees into disclosing sensitive information.
Discussions on a cybercrime forum show that threat actors rely on ChatGPT to compose BEC emails. Even hackers with limited fluency in a target language can use these AI-generated emails to conduct such attacks.
Another discussion mentioned "jailbreaks" for tools like ChatGPT: specially crafted prompts that manipulate the model into revealing sensitive information beyond the intended scope of its use, producing inappropriate content, or generating harmful code.
WormGPT was also found on a cybercriminal discussion forum, where it was advertised as a blackhat alternative to other GPT tools. It is built on the GPT-J (Generative Pre-trained Transformer-J) language model and offers a range of features, including code formatting capabilities.
In an experiment, WormGPT was asked to generate a BEC email pressuring an account manager into paying a fraudulent invoice. The results were alarming: it produced a convincing, grammatically flawless, and persuasive email that could fool almost any employee.
It is recommended that organizations train their employees to recognize these kinds of phishing emails and deploy appropriate email filters to help block such AI-generated email attacks.
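As a rough illustration of what such filtering can look for, the sketch below scores an email on two classic BEC signals: urgency and payment-pressure wording, and a Reply-To domain that differs from the sender's domain. The function name, keyword list, and thresholds are hypothetical; production filters rely on SPF/DKIM/DMARC authentication, sender reputation, and trained classifiers rather than simple keyword heuristics, especially since AI-generated emails no longer contain the grammatical errors older filters keyed on.

```python
# Illustrative BEC-style heuristics only (assumed names and keyword list);
# real email security relies on SPF/DKIM/DMARC, reputation, and ML models.
URGENCY_TERMS = {"urgent", "immediately", "asap", "wire transfer", "invoice"}

def bec_risk_score(sender: str, reply_to: str, subject: str, body: str) -> int:
    """Return a rough risk score; higher means more BEC-like."""
    score = 0
    text = f"{subject} {body}".lower()

    # Urgency / payment-pressure wording commonly seen in BEC emails
    score += sum(1 for term in URGENCY_TERMS if term in text)

    # A Reply-To domain that differs from the From domain is a
    # classic sign of executive impersonation
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if reply_to and reply_domain != sender_domain:
        score += 3

    return score

if __name__ == "__main__":
    suspicious = bec_risk_score(
        sender="ceo@example.com",
        reply_to="ceo@examp1e-corp.net",  # look-alike domain
        subject="Urgent: pay this invoice immediately",
        body="Please process the wire transfer ASAP.",
    )
    benign = bec_risk_score(
        sender="colleague@example.com",
        reply_to="colleague@example.com",
        subject="Lunch tomorrow?",
        body="See you at noon.",
    )
    print(suspicious, benign)
```

A score above a chosen threshold would route the message to quarantine or flag it for manual review rather than delivering it directly.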