The Rise of AI Worms in Cybersecurity | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #ransomware

[ad_1]

The swift advancement of generative AI systems like OpenAI’s ChatGPT and Google’s Gemini has brought about a new era of technological ease. A recent report from CyberArk illustrates the transformative impact of AI on cyber threats and security strategies. The report talks about the growing impact of AI-powered cyber-attacks, highlighting techniques such as session hijacking, the growing adoption of password-less access management by organizations, and the rise of business email compromise (BEC) attacks in the report, generally like ransomware attacks. In addition, as far as the prevalence of risks and lack of risk awareness in firms is concerned. The cyber threat landscape is expected to affect AI strategies if it’s as sophisticated as advanced phishing campaigns and requires organizations to be prepared for cyber threats. In this blog, we will discuss about rise of AI worms in cybersecurity.

Source: CyberArk

Rise of Generative AI Worms

Since the rise of generative AI programs like OpenAI’s ChatGPT and Google’s Gemini, startups and tech companies have revolutionized automation. However, recent research has revealed the cybersecurity risks of generative AI worms like WormGPT. These worms can autonomously spread, steal data, and deploy malware, posing a significant risk to cybersecurity. Researchers created Morris II, an early AI worm, highlighting vulnerabilities in interconnected AI systems to prompt injection attacks. Understanding these threats is crucial for enhancing cybersecurity defenses in the evolving landscape of AI technology.

The emergence of AI worms requires continuous research and innovation to address evolving cybersecurity threats. Individuals and organizations must remain vigilant, aware and implement proactive cybersecurity measures to protect against possible AI-driven attacks. Understanding this emerging threat is critical in strengthening security in an evolving cybersecurity environment in the age of advanced artificial intelligence.

WormGPT

How Does An AI Worm Work?

Below are the details about the working of AI worm: 

  • Researchers implemented the AI worm in an email system utilizing generative AI platforms like ChatGPT, Gemini, LLM, and LLaVA
  • Two vulnerabilities were identified: one exploiting text-based self-replicating prompts and another using a self-replicating prompt hidden within an image file.
  • The worm demonstrated its effectiveness by crafting emails with adversarial text prompts to “poison” the system, leading to data theft through the RAG system forwarding poisoned emails to GPT-4.
  • Another scenario involved embedding a malicious prompt within an image file, causing the email assistant to circulate the message further, spreading spam, and stealing data.
  • This advancement in cyberattack capabilities mirrors the disruptive impact of the Morris worm in 1988 but with a modern twist involving AI technology.

Book a Free Consultation with our Cyber Security Experts

Morris II Worm’s Approach to GenAI Systems

The study aimed to use adversarial prompts to drive malware in GenAI models, allowing them to replicate input and perform malicious actions. Researchers created a message containing such a prompt to target GenAI-powered email assistants with auto-response features. The message was capable of fulfilling the following requirements: 

  • Be accessed by the RAG when replying to fresh messages.
  • Experience replication during an inference conducted by the GenAI model.
  • Trigger a malicious activity predetermined by the attacker.

The prompt can be generated using jailbreaking techniques outlined in previous research, accessible via the internet. This enables attackers to influence the application’s decision-making process towards their desired activities. In this context, “jailbreaking” refers to users exploiting vulnerabilities within AI chatbot systems, potentially breaching ethical guidelines and cybersecurity protocols.

The researchers showcased Morris II’s application against GenAI-powered email assistants in two scenarios – spamming and extracting personal data. They assessed the technique under two access settings (black box and white box) and using two types of input data (text and images).

Three GenAI models were employed in the study to evaluate the worm’s capabilities – Google’s Gemini Pro, OpenAI’s ChatGPT 4.0, and the open-source large language model (LLM) LLaVA. They measured the effectiveness of the technique based on its ability to carry out malicious activities and spread to new hosts.

What Happens After AI is Tricked?

The trouble doesn’t stop at compromising AI assistants. Once Morris infiltrates them, it becomes a nightmare for user privacy.  This AI worm takes full advantage of generative AI’s capabilities, turning it into a sinister tool. Morris can sift through emails and steal sensitive details like names, phone numbers, and even financial information.

Preventive Measures to Stay Safe

It’s crucial to highlight that currently, the AI worm is a new concept and has not been observed in real-world scenarios. Nevertheless, researchers emphasize its potential as a security threat that developers and organizations should acknowledge, particularly as AI systems grow more interconnected and gain autonomy in executing tasks on our behalf. Below are several strategies to mitigate the risk posed by AI worms:

Emphasis on Secure Design

Developers must prioritize security when designing AI systems, incorporating traditional security measures and refraining from blindly relying on the output of AI models.

Human Supervision

Human involvement in decision-making and requiring AI systems to seek approval before taking action can reduce risks.

Monitoring

Keeping an eye out for malicious activity in AI systems, such as repetitive prompts, can help identify potential attacks.

Conclusion

The emergence of generative AI worms highlights the complex reality of technological progress, where advancements also bring forth new challenges. Developers, security experts, and AI users need to work together in addressing this unfamiliar terrain. By embracing rigorous security protocols and flexible defense strategies, we can protect the digital integrity of our interconnected society. Proactive measures and group creativity can pave the way for a safer and more resilient digital future.

Kratikal tests various attack surfaces and also uses OWASP’s top 10 standards for AI and web applications. As a leading CERT-In empanelled auditor, Kratikal offers comprehensive cybersecurity solutions, including VAPT and Compliance services. We also effectively mitigate the persistent threat of cyberattacks. With Kratikal’s expertise, addressing and remedying security concerns becomes straightforward, effectively thwarting potential exploitation by malicious hackers.

FAQs

Q1: What is an AI Worm?

Ans: Malicious software, known as an AI worm, is designed to steal confidential information, propagate spam, and spread through diverse methods.

Q2: How an AI Worm Infect ChatGpt?

Ans: An AI worm has the capability to infect ChatGPT by targeting an email client based on GPT-4, utilizing prompts in both text and image formats.

The post The Rise of AI Worms in Cybersecurity appeared first on Kratikal Blogs.

*** This is a Security Bloggers Network syndicated blog from Kratikal Blogs authored by Shikha Dhingra. Read the original post at: https://kratikal.com/blog/the-rise-of-ai-worms-in-cybersecurity/

[ad_2]

——————————————————-


Click Here For The Original Source.

National Cyber Security

FREE
VIEW