
Outlaw AI chatbots make cybercrime easier and more frequent


ChatGPT might be known for plagiarizing the occasional essay, but its rogue counterparts are doing far worse.

Knockoff chatbots built for criminal use are surfacing on the dark web, and, much like ChatGPT, they can be accessed for a rather modest monthly subscription or one-time fee.

These large language models, as they’re technically known, essentially serve as a nefarious, artificially intelligent tool chest that greatly aids in facilitating sophisticated and illegal online schemes.

A few of the dark web chatbots — DarkBERT, WormGPT and FraudGPT, the last of which goes for $200 a month or $1,700 annually — have recently caught the attention of cybersecurity firm SlashNext. They were flagged for their dangerous potential to create phishing scams and phony texts and images that are remarkably believable.


The company found evidence that DarkBERT has illicitly sold “.edu” email accounts at $3 apiece, letting a con artist’s message appear to come from an academic institution. Those accounts are also being used to wrongfully reap the benefits of student deals and discounts on marketplaces like Amazon.

Another grift made possible with FraudGPT allows crooks to solicit their victims’ bank credentials by posing as a legitimate source, such as in an email that appears to come from the victim’s own bank.

These sorts of swindles aren’t anything new, but they are now more accessible than ever with high-powered AI, warns Lisa Palmer, chief AI strategist for the consulting firm AI Leaders.


“This is about crime that can be personalized at a massive scale. They can create campaigns that are highly personalized for thousands of targeted victims versus having to create one at a time,” she told The Post, adding that the creation of fraudulent, deepfake video and audio that could compromise a person’s reputation is mere child’s play for AI to mock up.

Even more frightening, the targets of these harder-to-detect attacks are not only the elderly and those who are less than tech-savvy.

“Since [these kinds of models] are trained across large amounts of publicly available data, they could be used to look for patterns and information that is shared about the government — a government that they are wanting to infiltrate or attack,” Palmer said. “It could be gathering information about specific businesses that would allow for things like ransom or reputation attacks.”

The computerized character assassination could also open the door to a major crime that cybersecurity already struggles to defend against.


“Think about things like identity theft and being able to create identity theft campaigns,” she said. “They are highly personalized at a massive scale. What you’re talking about here are taking crimes to an elevated level.”

Serving justice to those responsible for the outlaw LLMs doesn’t come easily, either.

“For those that are sophisticated organizations, it’s exceptionally hard to catch them,” Palmer said.

“On the other end of that, we also have these new criminals that are being emboldened by new language models because they make it easier for people without high-tech skills to enter illegal enterprises.”
