Even as organizations of all sizes continue to adopt generative AI tools, there are growing concerns among IT security professionals that these tools will make organizations more vulnerable to attack.
Three-quarters of security professionals surveyed by Deep Instinct said they witnessed increased attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.
The top three generative AI threats cited by survey respondents were growing privacy concerns (39%), undetectable phishing attacks (37%) and an overall increase in the volume and velocity of attacks (33%).
A little less than half (46%) of respondents listed ransomware as the greatest threat to their organization’s data security and almost half (47%) of respondents said they now have a policy to pay the ransom, up from 34% in 2022.
Scott Gerlach, CSO and co-founder at StackHawk, said organizations have been struggling to design and implement secure code practices, and the power behind AI and large language models (LLMs) requires a new level of responsibility when developing secure code.
“Generative AI can help developers write code faster, but if they don’t understand the code they are copying and pasting, it’s easy to pull in vulnerable code unintentionally. That makes the need for a strong AppSec program and tooling that can identify the [OWASP] LLM Top 10 a priority,” he said.
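Gerlach's copy-paste concern is easy to illustrate. The sketch below (a hypothetical example, not drawn from the survey or from StackHawk) shows the kind of injection-prone database query a developer might paste from an AI suggestion without understanding it, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # 2 -- injection returns every row
print(len(find_user_safe(conn, malicious)))    # 0 -- no user is literally named that
```

Both functions look plausible in isolation, which is exactly why scanning tools that know patterns like the OWASP LLM Top 10 matter more as AI-generated code volume grows.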
He added that security resources were already strained before generative AI entered the chat, and the technology introduces additional challenges.
“With the influx of API development and speed of software delivery, security can’t keep up; their engineering counterparts outnumber them, hence the motivation to shift security left,” he said.
From Gerlach’s perspective, generative AI only reinforces the motivation to adopt a shift-left strategy, as a strong AppSec program and continuous testing of running applications become even more crucial than before.
“By enabling developers to identify and fix potential vulnerabilities in their code earlier in the software development life cycle, security resources can be freed to focus on strategic proactive measures instead of constantly playing catch up,” he said.
The study also revealed nearly seven in 10 have already adopted generative AI tools within their organization. The finance sector leads the pack with nearly 80% of respondents saying they’ve adopted generative AI tools.
Seventy percent of security professionals surveyed said generative AI is positively impacting employee productivity and collaboration, and nearly two-thirds (63%) said they feel the tech is also improving employee morale.
Adam Gavish, CEO and co-founder of DoControl, noted that generative AI apps are becoming increasingly more common in businesses’ daily practices as these tools enable employees to be more effective in their roles.
“However, our research team recently discovered that 24% of AI apps require risky OAuth permissions, significantly increasing the liability organizations take on and the workloads for security teams to govern these apps,” he cautioned.
Coordinating these generative AI tools and executing proper security measures can also be time-consuming and resource-intensive for security teams, particularly when dealing with tools that can gain access to a vast amount of data.
“The process becomes even more intricate when trying to strike a balance between minimizing disruption to users’ workflows and enforcing proper data security measures,” Gavish said.
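Gavish's OAuth concern can be sketched in a few lines. The example below is a hypothetical illustration: the scope URLs are real Google OAuth scopes, but the choice of which scopes count as "risky" is an assumption for illustration, not DoControl's methodology.

```python
# Scopes granting broad read/write access to organizational data
# (illustrative "risky" list -- real policies would be more nuanced).
RISKY_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail.modify",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_risky_apps(apps):
    """Return names of installed apps requesting at least one risky scope."""
    return [
        app["name"]
        for app in apps
        if RISKY_SCOPES & set(app.get("scopes", []))
    ]

installed = [
    {"name": "summarizer-ai",
     "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"name": "calendar-helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

print(flag_risky_apps(installed))  # ['summarizer-ai']
```

Even a simple inventory like this makes the governance workload concrete: every flagged app is a review decision a security team has to make without breaking the workflow of the employees who installed it.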
Patrick Harr, CEO at SlashNext, agreed that generative AI security tools are proving to be an important defense against cybercriminals’ growing use of generative AI to deliver business email compromise (BEC) and malware attacks into the organization.
He noted that there is a cybersecurity talent shortage and more threats than ever due to technological advancements.
“The adoption of AI and automation security tools is the only way cybersecurity professionals can meet the security demands of the organization,” he said.