
The State of AI and Cybersecurity in 2023


As the old expression goes, “speed kills,” and the realm of cybersecurity is no different. AI cyber attacks have the potential to enable hackers to work faster to break into networks and find critical data assets before security analysts have a chance to spot them.

Unfortunately, AI-driven attacks aren’t science fiction but a reality that security teams confront on a daily basis.

Cybersecurity vendor Beyond Identity recently commissioned a survey of 1,010 cybersecurity specialists on AI-assisted cyberattacks and found that 75% said the use of AI in cyberattacks is on the rise, with one in six respondents reporting that they had experienced an AI-fueled cyberattack.

Now, with generative AI adoption increasing following the launch of ChatGPT last November, the threat landscape has the potential to become far more complex.

The State of AI in Cyber Attacks in 2023

While the conversation about AI being used in cyberattacks has been building for years, the rapid development of large language models (LLMs) and generative AI, capped by the release of ChatGPT last year, has led to growing concerns over the risks of AI-led cyberattacks.

For instance, earlier this year, Europol issued a warning about the criminal use of ChatGPT and other LLMs, while NSA cybersecurity director Rob Joyce warned companies to “buckle up” for the weaponization of generative AI.

So far, it’s difficult to calculate the exact impact of generative AI and ChatGPT on the cyber threat landscape, but there does appear to be some evidence of an uptick in threat activity.

Between January and February 2023, Darktrace researchers observed a 135% increase in “novel social engineering” attacks, corresponding with the widespread adoption of ChatGPT.

How Generative AI Can Be Used for Bad

There are a number of ways that threat actors can use LLMs maliciously, from generating phishing emails and social engineering scams to generating malicious code, malware, and ransomware.

“There are many ways to use AI in a cyberattack, of which two prominent ones are generating phishing emails and exploiting code vulnerabilities,” Robert Blumofe, EVP & CTO at Akamai, told Techopedia.

“In the case of phishing, large-language model (LLM) tools (e.g., ChatGPT) can generate targeted and personalised phishing emails that convincingly appear to come from someone the victim knows and trusts. In the case of code vulnerabilities, these tools can scan code, find vulnerabilities, and then craft new code that attacks those vulnerabilities.”

While proofs of concept exist for using LLMs to create malicious code, the most immediate threat appears to be the ability of tools like ChatGPT to craft convincing phishing emails.

In a matter of minutes, users can jailbreak an LLM and enter a prompt requesting an email designed to trick unsuspecting recipients into clicking on a link to a phishing website or opening a compromised attachment, putting their personal information at risk.

Generative AI is particularly problematic for phishing because it allows non-native English speakers, including state-sponsored threat actors in countries like Russia and North Korea, to produce scam emails free of the grammatical and spelling errors that often give them away. Cleaner messages are not only more likely to slip past spam filters but also more likely to convince victims to hand over personal information.

Although providers like OpenAI and Anthropic are trying to implement guardrails to prevent LLMs from being used for malicious purposes, these haven’t proved effective and are unlikely to be so for the foreseeable future.

Using AI for Good

However, as concern over AI-generated threats rises, more and more organizations are also looking to invest in technology to protect against the next generation of cybercrime.

SkyQuest estimates that the AI in cybersecurity market will reach a whopping $94.3 billion by 2030, growing at a CAGR of 24.42%. CEPs offers a more conservative estimate that the market will reach $46.3 billion by 2027; either way, there is plenty of room for growth over the next few years.

Investment in defensive AI is on the rise because these solutions offer security teams a way to decrease the time taken to identify and respond to data breaches and to reduce the amount of manual administration needed to keep a security operations center (SOC) functioning.

The latter point is particularly important when considering that most organizations are feeling the effects of a cyber workforce gap of 3.4 million professionals.

In traditional, non-automated environments, security analysts use a patchwork of tools to manually monitor and analyze threat data collected across on-premise and cloud systems, relying on that data to identify not only vulnerabilities but also any signs of harmful activity that could indicate a cyberattack.

It’s a thankless and exhausting process, as the volume of data generated across an enterprise-grade network is too large for a human security team to keep track of alone.

Thus, AI gives security teams a way to automate tasks ranging from threat hunting and malware analysis to vulnerability detection, network inventorying, phishing email containment, and even entire workflows.
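
To make the automation point concrete, the following is a minimal, illustrative sketch of how a SOC might flag anomalous network flows for analyst review using an off-the-shelf machine-learning library. It is not tied to any vendor product mentioned here, and the flow features, sample values, and contamination rate are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: flagging unusual network flows with an unsupervised model.
# The feature set and thresholds are hypothetical, chosen only for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow records: [bytes_sent, bytes_received, duration_seconds, distinct_ports]
flows = np.array([
    [1_200,   3_400,  2.1,  1],
    [900,     2_800,  1.8,  1],
    [1_500,   3_900,  2.5,  2],
    [850_000, 1_200, 45.0, 60],  # an unusually large upload touching many ports
    [1_100,   3_100,  2.0,  1],
])

# Isolation Forest isolates outliers; contamination is the expected share of anomalies.
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(flows)  # -1 = anomaly, 1 = normal

for flow, label in zip(flows, labels):
    if label == -1:
        print(f"Flagged for analyst review: {flow.tolist()}")
```

In practice, a model like this would run continuously over telemetry from SIEM or EDR pipelines, with flagged events routed to analysts for triage rather than acted on automatically.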

Now with tech vendors like Microsoft, Google, and SentinelOne launching their own LLM-based products to help organizations detect threats, there are new defensive use cases for generative AI emerging all the time.

A Look at LLMs Entering Cybersecurity

One of the biggest developments in cybersecurity AI came back in April, when Google announced the launch of Sec-PaLM, an LLM designed specifically for cybersecurity use cases that can process threat intelligence data to offer detection and analytics capabilities.

This launch resulted in the development of two interesting tools: VirusTotal Code Insight, which analyzes and explains the behavior of scripts to help users identify malicious ones, and Breach Analytics for Chronicle, which automatically alerts users to active breaches in their environment alongside contextual information so they can follow up.

Likewise, Microsoft Security Copilot uses GPT-4 to process threat signals collected from across a network and generates a written summary of the potentially malicious activity so that human analysts can investigate further.
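
The general pattern behind these tools, feeding structured threat signals to an LLM and getting back a readable narrative, can be sketched with any public LLM API. The snippet below uses the OpenAI Python client purely as a stand-in; it is not the Security Copilot or Sec-PaLM interface, and the alert records are invented for illustration.

```python
# Illustrative sketch of LLM-assisted alert summarization.
# The OpenAI client is used as a generic stand-in, not as the actual
# Security Copilot or Sec-PaLM integration; the alerts below are hypothetical.
import json
from openai import OpenAI

alerts = [
    {"source": "EDR", "host": "finance-ws-12",
     "event": "powershell.exe spawned by winword.exe"},
    {"source": "Proxy", "host": "finance-ws-12",
     "event": "POST request to a newly registered domain"},
]

prompt = (
    "You are assisting a SOC analyst. Summarize the following alerts in plain "
    "English, note whether they could be related, and suggest next investigation steps:\n"
    + json.dumps(alerts, indent=2)
)

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works for this sketch
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The value here is not detection itself but the written, contextual summary that helps a human analyst decide what to investigate next, which is broadly how the commercial tools described above are positioned.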

While these are just a few niche products using LLMs in a security context, more broadly, they demonstrate the role LLMs have to play in the defensive landscape as tools to reduce administrative burdens and enhance an analyst’s contextual understanding of active threats.

Who Does It Better?

Whether AI ends up being a net positive or a net negative for the threat landscape will, in the long run, come down to who does it better: the attackers or the defenders.

If defenders aren’t prepared for a rise in automated cyberattacks, they will be vulnerable to exploitation, but organizations that embrace these technologies to optimize their SOCs can not only stave off these threats but also automate the less rewarding manual work in the process.
