Salt Security Applies Generative AI to API Security

Salt Security this week revealed it has embedded a generative artificial intelligence (AI) assistant, dubbed Pepper, into its application programming interface (API) security platform.

Pepper provides a natural language interface through which cybersecurity teams can launch queries to discover, for example, how to configure the platform without having to read hundreds of pages of documentation. The tool is based on a large language model (LLM) that the company has trained on its own product documents. Accessed via the existing Salt Security dashboard, Pepper summarizes that information in natural language, reducing the time cybersecurity teams would otherwise spend navigating documentation.
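Salt Security has not published how Pepper is built, but an assistant of this kind typically retrieves the most relevant documentation snippets for a query and hands them to an LLM to summarize. The sketch below is purely illustrative, assuming a toy keyword-overlap retriever and placeholder documentation strings; a production system would use semantic search and a real LLM call.

```python
# Hypothetical sketch of a documentation Q&A flow: retrieve the most
# relevant snippets for a natural language query, then build the prompt
# an LLM would summarize. Snippets and scoring are illustrative only.

DOCS = [
    "To configure API discovery, enable the traffic collector on each gateway.",
    "Threat detection thresholds can be tuned per endpoint in the policy editor.",
    "Dashboards are customized in the admin console under the views menu.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank snippets by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt an LLM assistant would answer from."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this documentation:\n{context}\nQuestion: {query}"

print(build_prompt("How do I configure API discovery?"))
```

Grounding the model's answer in retrieved documentation, rather than letting it answer from memory alone, is what keeps such an assistant's responses tied to the product's actual behavior.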

Eric Schwake, director of cybersecurity strategy for Salt Security, said Pepper is the first in a series of AI investments the company will be adding to the platform in the months ahead.

In general, generative AI advances in the realm of cybersecurity are arriving in two phases. The first phase typically involves using generative AI to streamline support functions, letting people use natural language to query a corpus of data. Salt Security claims Pepper can reduce the time it takes to surface actionable information by as much as 91%.

The next step is to orchestrate multiple natural language prompts to automate tasks. Rather than relying on a security automation framework, cybersecurity teams with prompt engineering skills gain a less costly approach to automation.
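Orchestration of this kind usually means chaining prompts so each step's output feeds the next. The sketch below is a hypothetical illustration, assuming a stubbed `call_llm` function in place of any real LLM API and an invented alert-triage task; it shows the chaining pattern, not any vendor's implementation.

```python
# Hypothetical sketch of prompt chaining for task automation: three
# prompts run in sequence, each consuming the previous step's output.
# call_llm is a stub standing in for a real LLM API call.

def call_llm(prompt: str) -> str:
    """Stub: a real implementation would call an LLM service here."""
    return f"[LLM response to: {prompt[:40]}]"

def triage_alert(alert: str) -> dict:
    """Chain prompts to summarize an alert, rate it and suggest a fix."""
    summary = call_llm(f"Summarize this API security alert: {alert}")
    severity = call_llm(f"Rate the severity of this finding: {summary}")
    action = call_llm(f"Recommend a remediation for: {summary}")
    return {"summary": summary, "severity": severity, "action": action}

result = triage_alert("Spike in 401 responses on the payments endpoint")
for step, output in result.items():
    print(step, "->", output)
```

The appeal over a traditional automation framework is that each step is a prompt a security analyst can read and adjust, rather than a playbook that requires dedicated engineering to maintain.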

In the long term, as the reasoning capabilities of LLMs continue to advance, the types of tasks that can be automated will become more complex. The challenge right now is that the more advanced the reasoning engine, the more parameters the LLM that automates those tasks requires. Of course, as the LLM increases in size, so too do the costs of running queries against it.

It’s not clear at what rate cybersecurity teams are embracing AI. However, the ability to use generative AI to launch queries via a natural language interface is rapidly becoming table stakes. The race is on to extend those capabilities to enable cybersecurity teams to automate tasks in a way that ensures sensitive data isn’t inadvertently used to train the next iteration of a general-purpose LLM that anyone can access.

In the meantime, cybersecurity teams should start crafting a generative AI strategy. Many manual cybersecurity tasks today involve levels of drudgery that make the work tedious. Eliminating those tasks won’t reduce the need for cybersecurity professionals; rather, existing teams could investigate more security incidents and respond more adroitly to cyberattacks, which are only going to increase in both volume and sophistication as cybercriminals also embrace AI technologies.

At this juncture, the debate arguably now is not so much about whether cybersecurity teams will use AI as much as it is how soon and to what degree.
