Info@NationalCyberSecurity

WormGPT could see online scams get far more sophisticated than Nigerian princes


Step aside ChatGPT, there’s a new generative artificial intelligence model in town that poses more immediate risks than job snatching and could drive the next surge in cybercrime.

Described as ChatGPT’s “evil cousin”, WormGPT is one of several Johnny-come-lately generative AI model prototypes that have sprung up from the depths of the dark web. But, unlike OpenAI’s tool, it has been designed specifically for the malicious, mass deployment of hacking, spamming and disinformation – allowing bad actors to more accurately mimic the real deal in attempts to swindle and deceive people.

Romance scams could get more sophisticated.

While this means we can probably kiss goodbye to the typo-ridden scam email, there’s no reason to celebrate. WormGPT means cyberattacks are about to get more sophisticated, turning online crooks into computerised chameleons who adeptly target their unsuspecting victims.

While hacking and scamming are nothing new, previous attempts were often easier to spot through their poor spelling, grammar and formatting. For decades, most “spray and pray” spammers have been automatically blocked by spam filters.

WormGPT can design more advanced, targeted, and personalised phishing attacks, built with the ability to imitate writing styles and convincingly tailor copy for the specific person or entity it is trying to deceive.


These attempts can be further personalised by supplying the model with previous email samples and social media posts to mimic the writing style of real people or organisations. Attackers can also obtain images of everyday people posted to social media and customise them according to the scam’s context to make their story even more convincing.

These techniques, coupled with existing and rapidly proliferating AI-generated voice, speech, video and conversational styles, will make it harder to tell the real from the fake.

Just imagine what an effective romance scam that could create. With AI, what originated as badly penned declarations of love from princes in foreign lands is now often indistinguishable from online interaction with a real person.
