The emergence of generative AI has paved the way for both beneficial and harmful applications. OpenAI, the company behind ChatGPT, has previously warned about the potential misuse of these models. Now cybercriminals are exploiting the technology to build malicious AI systems: with the rise of WormGPT and the more recent arrival of FraudGPT, hacking and data theft can be automated.
Generative AI refers to AI systems built on transformer models, an architecture first introduced by Google researchers in 2017. However, it is OpenAI's work that has garnered the most attention, inspiring others to delve into generative AI. Unfortunately, it has also attracted malicious actors, who are now leveraging these models for nefarious purposes.
FraudGPT is being promoted on hacking forums by an anonymous seller who, according to reports, claims the service will revolutionize online fraud operations. Like legitimate AI applications, FraudGPT can be tailored to a user's needs; for instance, it can generate persuasive text designed to trick recipients into clicking malicious links in spam SMS messages.
Unlike legitimate generative AI platforms, FraudGPT is not restricted to harmless tasks. It can reportedly write malicious code, a concerning capability given the damage such code can inflict. Although no code samples have been shared publicly, the claim is plausible given what legitimate AI systems have already demonstrated.
Furthermore, the fraudster behind FraudGPT also traffics in stolen information, which may be incorporated into the model. The creator boasts that the bot can scan websites to identify vulnerable targets for infiltration.
FraudGPT is priced at $200 per month, a lucrative venture for a developer who claims to have already made over 3,000 sales. By comparison, WormGPT was available for $60 per month. Since FraudGPT runs as a subscription service, its creator likely operates it on a substantial number of GPUs.
This development is a reminder to remain vigilant. With cybercriminals harnessing generative AI for their schemes, users must be cautious and discerning, particularly when a message lacks the usual telltale signs of a scam.