Mythos warning puts AI security paradigm shift at crossroads #AI


Security concerns are spreading over Anthropic’s artificial intelligence (AI) model ‘Claude Mythos’, fuelling discussions about a shift in the security paradigm. [Photo: Shutterstock]

[DigitalToday reporter Jin-ho Lee] Security concerns triggered by Anthropic’s artificial intelligence (AI) model ‘Claude Mythos’ are spreading across government and industry. With AI models emerging whose capabilities range from detecting vulnerabilities to generating attack code, calls are growing to reset response strategies.

Mythos was released in preview on April 7 to 12 big tech companies and about 40 major firms through the ‘Project Glasswing’ initiative. It was reported to outperform existing AI models at finding security vulnerabilities and carrying out attacks, including identifying a 27-year-old bug in OpenBSD, an operating system known for strong security.

According to the Cloud Security Alliance (CSA)’s recent ‘Mythos Ready’ report, the time from a vulnerability being discovered to being exploited in an attack has fallen sharply, from 2.3 years in 2018 to 20 hours in 2026. The CSA assessed that as AI-driven attacks accelerate, human responders cannot keep pace.

The report said in particular, “Anthropic’s Mythos preview autonomously finds critical vulnerabilities in all major operating systems and browsers and generates attack code that operates without human intervention,” adding, “This happens faster and at larger scale than any existing technology.”

In a test in which Anthropic tried to exploit vulnerabilities in the Mozilla Foundation’s Firefox 147 JavaScript engine using its AI models Claude Opus 4.6 and Mythos, Mythos succeeded 181 times while Opus 4.6 succeeded only twice.

◆ “Urgent need for a national-level response system”… calls to redesign governance

As Mythos’ overwhelming performance in detecting vulnerabilities and breaching systems has been confirmed, the South Korean government has also moved to respond. The Ministry of Science and ICT and the Financial Services Commission held an emergency meeting with major companies to discuss ways to strengthen security policies. The National AI Strategy Committee is discussing expanding the ‘homegrown AI foundation model’ project to cover strengthening security capabilities.

Experts agree the government should go beyond issuing warnings and strengthen its role as a control tower by standardising vulnerability information sharing and response procedures. They advise a broad redesign of security governance to deal with AI security threats that differ from the past.

Heung-Youl Youm (염흥열), an emeritus professor of information security at Soonchunhyang University, assessed the issue as “an incident that could serve as a kind of game changer in security”. In the past, organisations had months to patch vulnerabilities, but that window could now shrink to under a day, he said, calling for governance to be reset around faster patching.

Youm said, “The key is how quickly we can bring forward the time it takes to complete full security patching for vulnerabilities,” and stressed that “a structured response system at the national level is needed.”

Seok-jin Hwang (황석진), a professor at Dongguk University’s Graduate School of International Information Security, also stressed the need to shift the response approach itself. “Responding case by case whenever a new AI emerges has its limits,” he said. “It is important to build a national-level response architecture in advance, such as minimising access privileges, auditing usage history and an incident reporting system.”
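The controls Hwang lists can be illustrated in miniature. The sketch below, a hypothetical example only (the roles, actions, and log format are invented, not any government or vendor scheme), shows how minimised access privileges and an audit trail of usage history fit together: every request is checked against a role’s allow-list, and every attempt, granted or not, is recorded.

```python
import datetime

# Hypothetical illustration of "minimise access privileges" plus
# "audit usage history". Role names and actions are invented.
ALLOWED_ACTIONS = {
    "analyst": {"scan_read", "report_read"},
    "responder": {"scan_read", "report_read", "patch_deploy"},
}

AUDIT_LOG = []  # append-only usage history

def request_action(user: str, role: str, action: str) -> bool:
    """Grant an action only if the role permits it; log every attempt."""
    granted = action in ALLOWED_ACTIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "granted": granted,
    })
    return granted

# A denied request still leaves an auditable record.
request_action("alice", "analyst", "patch_deploy")   # denied
request_action("bob", "responder", "patch_deploy")   # granted
```

The point of the design is that denial and approval are equally visible after the fact, which is what an incident-reporting system would consume.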

◆ “Fighting AI with AI”… shift to automated defenses and basic checks must go together

Technical responses, too, are seen as requiring a change of strategy. Beyond merely supplementing existing systems, a shift is needed toward an ‘automated defense system’ that counters AI-based attacks in kind.

Hwang said, “Going forward, we need to change the security paradigm from manual checks to automated defenses,” and stressed the need to build an automated pipeline of AI-based detection, judgement and response.
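The detect–judge–respond pipeline Hwang describes can be sketched as three composed stages. This is a hedged illustration, not any real product’s pipeline: the alert scoring, the 0.8 threshold, and the containment action are all assumptions standing in for an AI classifier and an orchestration layer.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: float  # 0.0-1.0, assumed to come from an AI classifier

def detect(raw_events):
    """Stage 1: turn raw events into scored alerts (model call stubbed)."""
    return [Alert(e["source"], e["score"]) for e in raw_events]

def judge(alert, threshold=0.8):
    """Stage 2: decide whether the alert warrants automated response."""
    return alert.severity >= threshold

def respond(alert):
    """Stage 3: containment; a placeholder for a real action."""
    return f"isolated host {alert.source}"

def pipeline(raw_events):
    """Chain the stages: only high-severity alerts trigger a response."""
    return [respond(a) for a in detect(raw_events) if judge(a)]

actions = pipeline([{"source": "web-01", "score": 0.93},
                    {"source": "db-02", "score": 0.41}])
print(actions)  # ['isolated host web-01']
```

The value of the structure is that the judgement stage is an explicit, tunable gate between detection and automated action, which is where human oversight policy would attach.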

The technology gap is also emerging as a key variable. There are concerns that if high-performance AI models are provided only to certain groups, asymmetry in security capabilities could deepen. Doo-sik Yoon (윤두식), CEO of Iroun & Company, said, “If guardrail-free Mythos is provided only to certain groups, countries or organisations that cannot get inside will have no choice but to be hit.”

Yoon added, “In reality, even when a vulnerability comes out, there are many environments where (governments or institutions) cannot patch,” and pointed to a lack of OS or software updates and the absence of asset management as the biggest problems. He also stressed that the most basic response is to accurately identify and continuously manage an organisation’s IT assets and vulnerabilities.
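The basic hygiene Yoon emphasises, knowing what you run and matching it against advisories, reduces to a simple join between an asset inventory and a vulnerability feed. In this minimal sketch the hostnames, packages, and vulnerable versions are all invented for illustration; a real deployment would pull from a CMDB and an advisory source such as an NVD feed.

```python
# Invented example inventory: host -> {package: installed version}.
INVENTORY = {
    "web-01": {"nginx": "1.24.0", "openssl": "3.0.2"},
    "db-02": {"postgres": "15.3", "openssl": "3.1.4"},
}

# Invented advisory feed: package -> set of versions known vulnerable.
ADVISORIES = {
    "openssl": {"3.0.2"},
}

def unpatched_assets(inventory, advisories):
    """Return (host, package, version) tuples still running a flagged version."""
    hits = []
    for host, packages in inventory.items():
        for pkg, version in packages.items():
            if version in advisories.get(pkg, set()):
                hits.append((host, pkg, version))
    return hits

print(unpatched_assets(INVENTORY, ADVISORIES))
# [('web-01', 'openssl', '3.0.2')]
```

Without the inventory side of this join, the advisory side is useless, which is Yoon’s point: absent asset management, even a published vulnerability cannot be mapped to the systems that need patching.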

Meanwhile, Anthropic on April 16 released Opus 4.7 with some capabilities reduced compared with Mythos. Anthropic said in a blog post, “Opus 4.7’s cyber security capabilities were designed not to reach the level of the Mythos preview, and we also ran experiments during training to reduce those functions.” OpenAI likewise introduced ‘GPT-5.4-Cyber’, a model optimised for detecting security vulnerabilities, and, like Anthropic, put safety measures in place by initially providing it only to a limited group of experts.

In the industry, these moves are being read as a signal of ‘speed control’ mindful of the security risks of high-performance AI. A security industry official said, “Wouldn’t both Anthropic and OpenAI know well the ripple effects when their models are abused?” adding, “Both companies are likely agonising over how widely to release official versions.”


