(844) 627-8267 | Info@NationalCyberSecurity

Generative AI — A Creator of Malware or Defender of Cybersecurity?

When OpenAI launched its generative AI model ChatGPT in November 2022, its capabilities floored millions of users. As the novelty wears off, there are more questions than answers about the dangers generative AI poses to cybersecurity, as with any emerging technology.

History has shown us that whenever a new technology can be used for good, attackers aren’t far behind in turning it to ill intent. Attempts to leverage ChatGPT for cybercriminal activity may already be underway.

On December 29, 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. For example, he shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.

As cybercriminals advance their attack methods using generative AI, we must also question whether the security community can get ahead of adversaries and leverage this technology to strengthen security.

Generative AI is far from replacing security engineers, but it can reduce some labour-intensive and complex work. As with everything surrounding ChatGPT, security defenders are beginning to find ways to use this technology to foster better cybersecurity. Security research, for instance, is an area where large language models (LLMs) can be applied effectively. Here are four ways security defenders can put generative AI to work.

Reverse engineering

Reverse engineering is a crucial aspect of security research and demands vast amounts of knowledge and practice to perfect. Generative AI can reduce some of the complexities reverse engineers face every day.

Let’s consider Ghidra, a favourite tool in the security community. It automates reverse engineering tasks like disassembling a binary into its assembly language listing, reconstructing its control flow graph and decompiling it into something resembling source code in the C programming language. Ghidra is invaluable, but when integrated with generative AI it can simplify the process even further.

A reverse engineer would ordinarily have to go meticulously through the decompiled code that Ghidra generates and add explanatory comments for what the code does. Generative AI can add another layer of automation. It can add explanations of what a function does along with suggestions for descriptive variable names, giving reverse engineers a high-level view of the code’s functionality without having to go through every single line of code.

These tools aren’t foolproof, but the advantage is that the suggestions are easily checked against both the decompiler output and the assembly listing from which that output was derived.
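To make the workflow concrete, here is a minimal sketch of the annotation step described above. It only builds the prompt that would be sent to a model; the function name, the prompt wording and the sample decompiler output are all hypothetical, not part of any Ghidra or LLM integration named in the article. In a real integration, the model’s reply would be pasted back into Ghidra as comments.

```python
def build_annotation_prompt(decompiled_c: str) -> str:
    """Wrap decompiler output in a prompt asking an LLM to explain the
    function and suggest descriptive variable names (hypothetical sketch)."""
    return (
        "You are assisting a reverse engineer. Below is C-like pseudocode "
        "produced by a decompiler.\n"
        "1. Summarise what the function does in one paragraph.\n"
        "2. Suggest descriptive names for each variable (e.g. uVar1, local_8).\n"
        "3. Note anything that looks like an anti-analysis trick.\n\n"
        "```c\n" + decompiled_c + "\n```"
    )

# Hypothetical decompiler output for a simple XOR-decoding loop:
snippet = """
undefined4 FUN_00401000(char *param_1) {
  int iVar1;
  iVar1 = 0;
  while (param_1[iVar1] != '\\0') {
    param_1[iVar1] = param_1[iVar1] ^ 0x5a;
    iVar1 = iVar1 + 1;
  }
  return iVar1;
}
"""

prompt = build_annotation_prompt(snippet)
# The prompt would then be sent to a model via its API; the reply becomes
# the explanatory comment a human would otherwise write by hand.
```

The value of this shape is that the model’s answer is cheap to verify: the suggested explanation can be checked line by line against the very pseudocode embedded in the prompt.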

Debugging code

Debugging code is an area of reverse engineering that takes considerable effort to master, and generative AI can reduce this complexity by providing an interactive tool for exploring the debugging context. When generative AI is fed information on registers, stack values, the backtrace, assembly and decompiled code, it adds relevant context to the reverse engineer’s queries. Tooling that supports language models from Anthropic or OpenAI lets security researchers analyse debugging information and answer questions about runtime state or assembly code.

For instance, researchers can pose questions ranging from general queries like “What’s going on here?” to more specific questions like “Do you see any security vulnerabilities? If so, how can they be exploited?” This offers a conversational approach to debugging as opposed to the reverse engineer having to manually debug the code.
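A sketch of how that debugging context might be assembled follows. The function and the register/backtrace values are illustrative assumptions, not the API of any particular debugger plugin; the point is that the runtime state is serialised into plain text before being handed to a model alongside the researcher’s question.

```python
def build_debug_context(registers: dict, backtrace: list, question: str) -> str:
    """Serialise debugger state into a text block an LLM can reason about
    (hypothetical sketch, not a specific debugger plugin's API)."""
    reg_lines = "\n".join(f"{name} = {value:#x}" for name, value in registers.items())
    bt_lines = "\n".join(f"#{i} {frame}" for i, frame in enumerate(backtrace))
    return (
        "Current debugger state:\n\n"
        f"Registers:\n{reg_lines}\n\n"
        f"Backtrace:\n{bt_lines}\n\n"
        f"Question: {question}"
    )

# Illustrative values, as they might be read from a stopped process:
ctx = build_debug_context(
    registers={"rip": 0x401234, "rsp": 0x7FFDEADBEEF0, "rax": 0x0},
    backtrace=["parse_header()", "read_packet()", "main()"],
    question="Do you see any security vulnerabilities here?",
)
# ctx is the payload sent to the model along with the conversational query.
```

Because the same serialised state accompanies every question, follow-up queries like “what if rax were non-zero?” stay grounded in the actual runtime snapshot rather than the model’s guesswork.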

Web app security

Web applications are a huge challenge for researchers because of the sheer complexity involved in identifying vulnerabilities within them. One of the most powerful tools for web app security is Burp Suite. When integrated with ChatGPT, it can reduce manual testing, automate security testing for web app developers and help identify new exploits.
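One way such an integration keeps model usage focused is to run cheap deterministic checks first and send only interesting responses to the LLM. The sketch below is an assumption about that pre-filtering step, not part of Burp Suite or any published extension: it flags HTTP responses missing common security headers, which could then be escalated to a model for deeper analysis.

```python
# Headers commonly expected on hardened web responses.
EXPECTED_HEADERS = {
    "content-security-policy",
    "strict-transport-security",
    "x-content-type-options",
    "x-frame-options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the set of expected security headers absent from a response.
    A deterministic pre-check before any LLM-assisted triage (sketch)."""
    present = {name.lower() for name in response_headers}
    return EXPECTED_HEADERS - present

# Example response headers as a proxy tool might capture them:
findings = missing_security_headers({
    "Content-Type": "text/html",
    "X-Frame-Options": "DENY",
})
# Only responses with non-empty findings would be forwarded to the model.
```

Gating the model behind a filter like this keeps the conversational analysis for the cases that genuinely need judgment, rather than burning queries on every response in a crawl.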

Increasing visibility into cloud-based tools

When it comes to cloud security, misconfigurations in identity and access management (IAM) are one of the most common concerns for organisations and are far too often overlooked. Globally, 800 million records were exposed due to cloud misconfigurations in 2022 alone. When powerful cloud security solutions are integrated with generative AI, they can be used to retrieve all IAM policies associated with users or groups.

Generative AI can help identify potential escalation opportunities and any relevant mitigations. It can spot complex privilege-escalation scenarios arising from non-trivial policies across multiple IAM accounts. This means security researchers can identify potential attack paths, prioritise them by severity and begin mitigation efforts.
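As a simplified illustration of the kind of policy triage described above, the sketch below flags Allow statements that use wildcards, a common starting point for escalation paths. This is an assumption about the pre-processing a tool might do before asking a model to reason about the flagged statements; it is deliberately naive and not a substitute for a real IAM analyser.

```python
import json

def flag_permissive_statements(policy_json: str) -> list:
    """Return Allow statements whose Action contains a wildcard or whose
    Resource is "*" -- frequent seeds of privilege escalation (sketch)."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement policies are allowed
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if any("*" in a for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

# Hypothetical policy with one risky and one scoped statement:
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iam:PassRole", "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports/*"},
    ],
})
flagged = flag_permissive_statements(policy)
# Flagged statements, plus the full policy, would be handed to the model
# with a question like "how could this be chained into escalation?"
```

A deterministic pass like this narrows hundreds of policies down to the handful whose interactions are genuinely worth the model’s (and the researcher’s) attention.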

As LLMs like ChatGPT and GPT-4 continue to evolve, we must anticipate that cyber adversaries will leverage them for their own benefit. From more convincing and linguistically accurate phishing emails to malicious code, attackers will look to abuse the technology, feed the models tainted data or trick them into disclosing sensitive information.

But there’s light at the end of the tunnel. Security teams can use the technology to shore up their defences. Generative AI could be used for log parsing, anomaly detection, triaging and incident response.

Security professionals can also use it to reduce manual workload: it can prove critical to reverse engineers, aid development teams in static code analysis and help identify potentially exploitable code.

Coupled with advanced threat detection and intelligence from trained AI models, defenders can heighten security and make it more difficult for cybercriminals to breach their networks. While its use cases are still being discovered, generative AI can improve cybersecurity and be a crucial tool in a security team’s arsenal.



Views expressed above are the author’s own.



