Three Ways Generative AI Can Bolster Cybersecurity

Human analysts can no longer effectively defend against the increasing speed and complexity of cybersecurity attacks. The amount of data is simply too large to screen manually.

Generative AI, the most transformative tool of our time, enables a kind of digital jiu-jitsu. It lets companies turn the flood of data that threatens to overwhelm them into a force that strengthens their defenses.

Business leaders seem ready for the opportunity at hand. In a recent survey, CEOs said cybersecurity is one of their top three concerns, and they see generative AI as a lead technology that will deliver competitive advantages.

Generative AI brings both risks and benefits. An earlier blog outlined six steps to start the process of securing enterprise AI.

Here are three ways generative AI can bolster cybersecurity.

Begin With Developers

First, give developers a security copilot.

Everyone plays a role in security, but not everyone is a security expert. That makes the front end, where developers write software, one of the most strategic places to begin.

An AI-powered assistant, trained as a security expert, can help developers ensure their code follows security best practices.

The AI software assistant can get smarter every day if it’s fed previously reviewed code. It can learn from prior work to help guide developers on best practices.
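As a rough sketch of how such an assistant might be invoked, the snippet below packages a code diff into a security-review prompt for a chat-style LLM. The prompt wording, checklist, and request format are illustrative assumptions, not the NeMo API; in practice the request would go to a NeMo-based or commercial model endpoint.

```python
# Hypothetical sketch of a security-copilot review request. The model call
# itself is stubbed out; only the prompt-building step is shown.

SECURITY_REVIEW_PROMPT = """You are a secure-coding reviewer.
Check the following diff for: injection flaws, hard-coded secrets,
missing input validation, and unsafe deserialization.
Report each finding with a severity and a suggested fix.

Diff:
{diff}
"""

def build_review_request(diff: str) -> dict:
    """Package a diff into a chat-style message for the copilot model."""
    return {
        "role": "user",
        "content": SECURITY_REVIEW_PROMPT.format(diff=diff),
    }

request = build_review_request("+ password = 'hunter2'  # TODO remove")
```

Feeding previously reviewed diffs and their findings back in as few-shot examples is one way such an assistant could learn from prior work.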

To give users a leg up, NVIDIA is creating a workflow for building such copilots and chatbots. The workflow uses components from NVIDIA NeMo, a framework for building and customizing large language models (LLMs).

Whether users customize their own models or use a commercial service, a security assistant is just the first step in applying generative AI to cybersecurity.

An Agent to Analyze Vulnerabilities

Second, let generative AI help navigate the sea of known software vulnerabilities.

At any moment, companies must choose among thousands of patches to mitigate known exploits. That’s because every piece of code can have roots in dozens, if not thousands, of different software branches and open-source projects.

An LLM focused on vulnerability analysis can help prioritize which patches a company should implement first. It’s a particularly powerful security assistant because it reads all the software libraries a company uses as well as its policies on the features and APIs it supports.
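The core idea, stripped of the LLM, can be illustrated with a toy ranking function: severity matters most when the vulnerable package and API are actually reachable in the company's stack. This is a simplified stand-in for NVIDIA's pipeline, with made-up CVE identifiers and weights.

```python
# Toy patch prioritization: weigh each CVE's severity by whether the
# affected package and API appear in the company's software inventory,
# mirroring how an LLM agent could factor in libraries and API policies.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    package: str
    api: str
    cvss: float  # 0.0-10.0 base severity score

def prioritize(vulns, used_packages, supported_apis):
    """Sort CVEs so reachable, severe issues come first."""
    def score(v):
        reachable = v.package in used_packages and v.api in supported_apis
        return v.cvss if reachable else v.cvss * 0.1  # deprioritize unreachable code paths
    return sorted(vulns, key=score, reverse=True)

vulns = [
    Vulnerability("CVE-A", "libfoo", "parse", 9.8),
    Vulnerability("CVE-B", "libbar", "render", 7.5),
]
ranked = prioritize(vulns, used_packages={"libbar"}, supported_apis={"render"})
# CVE-B outranks CVE-A because libfoo's vulnerable API is never reached
```

An LLM agent goes far beyond this fixed formula, but the design choice is the same: context about what a company actually runs is what turns a raw severity list into an actionable patch queue.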

To test this concept, NVIDIA built a pipeline to analyze software containers for vulnerabilities. The agent identified areas that needed patching with high accuracy, speeding the work of human analysts by up to 4x.

The takeaway is clear. It’s time to enlist generative AI as a first responder in vulnerability analysis.

Fill the Data Gap

Finally, use LLMs to help fill the growing data gap in cybersecurity.

Users rarely share information about data breaches because the details are so sensitive. That makes it difficult to anticipate exploits.

Enter LLMs. Generative AI models can create synthetic data to simulate never-before-seen attack patterns. Such synthetic data can also fill gaps in training data so machine-learning systems learn how to defend against exploits before they happen.
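As a deliberately simple stand-in for LLM-driven generation, the sketch below fabricates labeled, attack-like records from templates, producing the kind of synthetic examples that are scarce in real, sensitive breach data. The templates and service names are illustrative, not real attack data.

```python
# Template-based synthetic data generator: a toy proxy for using a
# generative model to fill gaps in a detector's training set.

import random

TEMPLATES = [
    "Urgent: verify your {service} account at {url}",
    "Your {service} invoice is overdue, pay now: {url}",
]
SERVICES = ["payroll", "VPN", "email"]

def synthesize(n, seed=0):
    """Produce n labeled, attack-like training records."""
    rng = random.Random(seed)  # fixed seed keeps the dataset reproducible
    samples = []
    for _ in range(n):
        text = rng.choice(TEMPLATES).format(
            service=rng.choice(SERVICES),
            url=f"http://example-{rng.randint(0, 999)}.test/login",
        )
        samples.append({"text": text, "label": "phishing"})
    return samples

dataset = synthesize(100)
```

A real LLM would produce far more varied and realistic samples than two templates can, which is exactly why generative models are suited to simulating never-before-seen attack patterns.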

Staging Safe Simulations

Don’t wait for attackers to demonstrate what’s possible. Create safe simulations to learn how they might try to penetrate corporate defenses.

This kind of proactive defense is the hallmark of a strong security program. Adversaries are already using generative AI in their attacks. It’s time users harness this powerful technology for cybersecurity defense.

To show what’s possible, another AI workflow uses generative AI to defend against spear phishing — the carefully targeted bogus emails that cost companies an estimated $2.4 billion in 2021 alone.

This workflow generated synthetic emails to make sure it had plenty of good examples of spear phishing messages. The AI model trained on that data learned to understand the intent of incoming emails through natural language processing capabilities in NVIDIA Morpheus, a framework for AI-powered cybersecurity.
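To make the intent-understanding idea concrete, here is a toy scorer (nothing like the NLP model in Morpheus, which is far more capable): it flags an email as likely spear phishing when urgency cues combine with a request for credentials or payment. The word lists and threshold are invented for illustration.

```python
# Toy phishing-intent scorer: counts urgency words and credential/payment
# requests, returning a crude 0-1 confidence. A stand-in for a trained
# NLP model, not a substitute for one.

import re

URGENCY = {"urgent", "immediately", "now", "overdue"}
REQUESTS = {"verify", "password", "login", "payment", "wire"}

def phishing_score(email: str) -> float:
    words = set(re.findall(r"[a-z]+", email.lower()))  # strip punctuation
    hits = len(words & URGENCY) + len(words & REQUESTS)
    return min(hits / 4, 1.0)

phishing_score("Urgent: verify your password now")  # scores high
phishing_score("Lunch at noon?")                    # scores zero
```

A trained model learns these signals (and many subtler ones) from data rather than from hand-picked word lists, which is why synthetic training examples matter.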

The resulting model caught 21% more spear phishing emails than existing tools. Check out the developer blog to learn more.

Wherever users choose to start this work, automation is crucial, given the shortage of cybersecurity experts and the thousands upon thousands of users and use cases that companies need to protect.

These three tools — software assistants, virtual vulnerability analysts and synthetic data simulations — are great starting points for applying generative AI to a security journey that continues every day.

But this is just the beginning. Companies need to integrate generative AI into all layers of their defenses.

Attend a webinar for more details on how to get started.
