The rise of GhostGPT – Why cybercriminals are turning to generative AI


While many businesses are still trying to understand how to use generative artificial intelligence (AI) to drive productivity and efficiency, malicious actors have moved rapidly. Their approach is not theoretical; it’s increasingly practical and dangerously effective. One of the clearest examples of this shift is GhostGPT, an AI-powered chatbot that was discovered in late 2024 and is already reshaping the cyber threat landscape.

GhostGPT is not a general-purpose AI tool. It has been explicitly developed or, more likely, repurposed for criminal activity. Unlike public-facing large language models (LLMs) such as ChatGPT, which are constrained by security safeguards and ethical restrictions, GhostGPT operates free from such boundaries. It is widely believed to be a “wrapper” around a jailbroken LLM or an open-source model whose safety features have been stripped out. This enables it to respond freely to prompts for malware, phishing content, and attack strategies, effectively putting offensive cyber capabilities in the hands of anyone with a web browser and an illicit link.
