
ChatGPT Makes Hacking Easier For Cybercriminals – channelnews


In a further step towards ChatGPT bringing down mankind so computers can become our dystopian overlords, it turns out the ever-growing AI technology is rather skilled at churning out malware.

A recent report from security firm CyberArk explains that ChatGPT has a knack for developing malware that can do serious damage to your systems.

That means the new AI-powered tool could take cybercrime to another level, with the chatbot able to create malware more complex than anything widely known about at this point in its evolution.

CyberArk researchers reveal that code written with the aid of ChatGPT displayed “advanced capabilities” and could “easily evade security products”, placing it in the malware subcategory known as “polymorphic”.

A polymorphic virus, a close relative of the metamorphic virus, is a breed of malware programmed to repeatedly mutate its appearance, or signature, using encryption and varying decryption routines.

This renders traditional cybersecurity tools that rely on signature-based detection, like antivirus or antimalware solutions, unable to recognise and then block the danger.

In short, this malware can cryptographically shapeshift around regular security mechanisms intended to detect malicious file signatures.
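As a rough illustration of why signature matching fails here, consider the minimal Python sketch below. It is not taken from the CyberArk research and contains nothing malicious: the payload is a harmless string and the wrapping is a trivial XOR, purely to show that identical underlying content, re-wrapped with a fresh random key each time, produces a completely different hash, the kind of fixed “signature” a traditional scanner relies on.

# Illustrative sketch only: demonstrates why fixed-signature detection
# struggles with content that re-wraps itself. Nothing here is malware;
# the "payload" is a harmless string and the wrapping is a simple XOR.
import hashlib
import os

payload = b"harmless demo bytes standing in for a program body"

def wrap_with_random_key(data: bytes) -> bytes:
    """Return the data XOR-ed with a fresh random key, prefixed by that key."""
    key = os.urandom(len(data))
    encoded = bytes(b ^ k for b, k in zip(data, key))
    return key + encoded

# Every "generation" carries identical underlying content, yet each wrapped
# copy hashes to a different value, so a single stored signature never
# matches twice.
for generation in range(3):
    sample = wrap_with_random_key(payload)
    print(generation, hashlib.sha256(sample).hexdigest())

Real polymorphic malware pairs that re-wrapping with a decryption routine that restores the original code at run time, which is the mutation described above and the reason detection has to look at behaviour rather than file contents alone.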

ChatGPT is meant to have filters that stop the creation of malware, but researchers have been able to overcome them by insisting it follow the prompter’s orders. Basically, they bullied the platform into doing what they said, a tactic others have used when trying to get the chatbot to produce toxic content.

CyberArk researchers simply pestered ChatGPT into producing code for malicious purposes, which they could then use to build complex, defence-evading malware, making hacking easier for amateur cybercriminals.

“The use of ChatGPT’s API within malware can present significant challenges for security professionals,” say CyberArk. “It’s important to remember, this is not a hypothetical scenario but a very real concern.”
