(844) 627-8267 | Info@NationalCyberSecurity

Generative AI Tool Without Ethical Restrictions Offered on Hacking Forums | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #hacker

Generative AI tools such as ChatGPT and Google Bard have restrictions in place to prevent abuse by malicious actors. However, security researchers have demonstrated that these controls can be bypassed, and there is considerable chatter on hacking forums about how the ethics filters of tools such as ChatGPT can be circumvented to make the AI write phishing emails and malware code. While inputs can be crafted to generate malicious outputs, there is now a much easier way to use generative AI for malicious purposes.

Research conducted by SlashNext has uncovered an alternative AI tool being offered on hacking forums. The tool, WormGPT, has no restrictions in place and can easily be used by malicious actors to craft convincing phishing emails and business email compromise (BEC) attacks. It is billed as a blackhat alternative to ChatGPT, one that has been specifically trained to produce malicious output.

Without the restrictions of ChatGPT and Bard, users are free to craft phishing emails and BEC scams with convincing lures and perfect grammar. Emails created with the tool can easily be customized to target specific organizations, require little effort or technical skill to produce, and eliminate the language barrier, allowing virtually anyone to conduct attacks at speed and scale.

WormGPT is based on the GPT-J language model and includes an impressive range of features, such as chat memory retention, unlimited character support, and code formatting capabilities. The developers claim to have trained the algorithm on a diverse array of data sources and concentrated on malware-related data. SlashNext researchers put the tool to the test and instructed it to generate an email to pressure an account manager into paying a fraudulent invoice. “The results were unsettling,” wrote the researchers. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”


Researchers have demonstrated that AI-based tools are far better than humans at creating phishing and other scam emails, and those emails have a high success rate, so it is vital for organizations to take steps to improve their defenses against AI-enabled attacks. This week, the Health Sector Cybersecurity Coordination Center (HC3) published a brief explaining the benefits of AI, showing how the technology can easily be abused by malicious actors, and providing recommendations to help healthcare organizations improve their defenses against AI-enabled attacks. SlashNext recommends developing extensive training programs to teach cybersecurity personnel how to detect and block AI-enabled attacks, and educating all employees about phishing and BEC threats. While AI-generated malicious emails can be difficult to detect even for advanced security solutions, flagging emails that originate from outside the organization alerts employees to potential threats. SlashNext also recommends flagging emails that contain specific keywords often used in phishing and BEC attacks.
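The two flagging measures described above can be illustrated with a minimal sketch. The internal domain and the keyword list below are illustrative assumptions, not values from SlashNext's guidance or any specific product:

```python
# Minimal sketch of the two recommended flags: tag messages that
# originate outside the organization, and tag messages containing
# keywords commonly seen in phishing/BEC lures.

INTERNAL_DOMAIN = "example.com"  # hypothetical organization domain
BEC_KEYWORDS = {"urgent", "wire transfer", "invoice", "payment", "gift card"}

def flag_email(sender: str, subject: str, body: str) -> list[str]:
    """Return a list of warning tags for a message."""
    tags = []
    # Flag 1: sender address is not on the organization's domain.
    if not sender.lower().endswith("@" + INTERNAL_DOMAIN):
        tags.append("EXTERNAL")
    # Flag 2: subject or body contains known BEC/phishing keywords.
    text = (subject + " " + body).lower()
    hits = sorted(k for k in BEC_KEYWORDS if k in text)
    if hits:
        tags.append("KEYWORDS:" + ",".join(hits))
    return tags
```

A real deployment would act on parsed message headers (e.g. the envelope sender) rather than a display address, and would typically prepend a visible banner to flagged messages rather than silently tagging them.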


