(844) 627-8267 | Info@NationalCyberSecurity

What is FraudGPT, dark web’s dangerous AI for cybercrime? | Technology News | #cybercrime | #infosec

According to a screenshot of the bot, the chatbot appears to have registered over 3,000 subscriptions.

FraudGPT explained: FraudGPT is a bot that is used for offences such as creating cracking tools, phishing emails, etc. (Image: Pixabay)


Artificial Intelligence has taken the world by storm. While AI technologies promise to make life seemingly effortless, there is a thin line between what’s on paper and what’s possible. In the last six months, we have witnessed the boundless possibilities of AI and also confronted its potential threats: misinformation, deepfakes, and the loss of human jobs.

From ChaosGPT to the dark web harnessing the power of AI to wreak havoc, such stories have dominated news feeds in recent months. Now, there seems to be a new dimension to the threat posed by AI. After WormGPT, which was known to aid cybercriminals, there is now a more threatening AI tool. According to reports, various actors on dark web marketplaces and Telegram channels are promoting a generative AI tool for cybercrime known as FraudGPT.

Reportedly, FraudGPT is a bot used for offences such as creating cracking tools, phishing emails, etc. It can be used to write malicious code, create undetectable malware, and find leaks and vulnerabilities. The chatbot has been circulating on dark web forums and Telegram since July 22. It is reportedly priced at $200 for a monthly subscription, rising to $1,000 for six months and $1,700 for a year.

What is FraudGPT?

A screenshot of the bot that is making the rounds on the Internet shows the screen of the chatbot with the text “Chat GPT Fraud Bot | Bot without limitations, rules, boundaries.” The text on the screen further reads, “If you’re looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone’s individual needs with no boundary further!”

As per the screenshot shared by a user called “Canadiankingpin” on the dark web, FraudGPT is described as a cutting-edge tool that “is sure to change the community and the way you work forever”. The promoter claims that with the bot, the sky’s the limit, and that users can manipulate it to their advantage and make it do whatever they want. The promoter also claims that there have been over 3,000 confirmed sales of FraudGPT so far.

What can FraudGPT do? 

FraudGPT is perceived as an all-in-one solution for cybercriminals, considering it can do a range of things, including creating phishing pages and writing malicious code. A tool like FraudGPT can make scams look more realistic and convincing, and can cause damage on a larger scale. Security experts have been emphasising the need to innovate in order to combat the threats posed by rogue AI like FraudGPT. Unfortunately, many in the domain feel that this is just the beginning, and that there is no limit to what bad actors can do with the power of AI.

Earlier this month, another AI cybercrime tool, WormGPT, surfaced. It was advertised on many dark web forums as a tool to launch sophisticated phishing and business email compromise attacks. Experts had called it a blackhat alternative to GPT models, designed to carry out malicious activities.

In February, it emerged that cybercriminals were bypassing ChatGPT’s restrictions by taking advantage of its APIs. Both FraudGPT and WormGPT function without any ethical boundaries, which is evidence enough of the threats posed by unchecked generative AI.

© IE Online Media Services Pvt Ltd

First published on: 29-07-2023 at 11:20 IST



