A potentially world-changing AI cybersecurity tool — so powerful its makers will not release it widely — can link small vulnerabilities across millions of lines of code, turning unseen security gaps into massive exposures.
And no local bank, power provider or infrastructure agency has access to it to test their own systems and prepare potential defences.
Leading US artificial intelligence (AI) firm Anthropic has claimed its latest model, Claude Mythos, is too dangerous to release due to its exceptional cyber hacking capabilities.
Former government adviser Alastair MacGibbon said it did not matter if we were talking about Mythos or the next product from a lab on the frontier of AI.
Anthropic says Claude Mythos can outperform humans at some hacking and cybersecurity tasks. (Reuters: Dado Ruvic/Illustration)
The issue is that the “power and capability” of AI products to find vulnerabilities, chain them together and then write code to exploit them is improving at what he calls a “staggering” rate.
“Our society is at risk,” Mr MacGibbon, the former national cyber security adviser and eSafety commissioner, told The Business.
Currently, banks and critical infrastructure providers lack access to this powerful AI model.
But the bigger issue might be that the speed of the cycle is only increasing.
US firms granted access to prepare defences
AI giant Anthropic considers its new security-smashing software, Mythos, so powerful that it is not being released to the public.
Instead, a handful of large US companies, such as Microsoft, Apple, Cisco, and Amazon Web Services, are being given access to use it to test and bolster their systems.
A further group of 40 unnamed organisations that “build or maintain critical software infrastructure” also has access.
Anthropic has named it Project Glasswing and labelled it an “urgent attempt” to use the strength of Mythos for “defensive purposes”.
“No one organisation can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play,” the organisation said.
“The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months.
“For cyber defenders to come out ahead, we need to act now.”
Speed of ‘frontier AI’ to keep accelerating
Mythos is only the model we know about.
The use of AI both to strengthen cybersecurity and to attack it is evolving rapidly.
“This technology changes on a three-to-six-week basis at the moment,” said Dimitri Vedeneev, the executive director for secure AI at security firm CyberCX.
Dimitri Vedeneev says “fighting AI with AI is the Zeitgeist of our times”. (ABC News: Patrick Stone)
He flipped the issue around: Mythos might be powerful, but it is nothing compared to what is coming.
“The AI capabilities that are currently available today are the worst that they’ll ever be.”
Sound the alarm, but don’t go to the bunker
Australian companies will get some protection from the reputed power of the Mythos problem-finding model because they run systems on products from companies like Microsoft and Amazon Web Services.
But the issue is that companies use a range of software, from different providers, often “stacked” on top of each other.
For example, a power company runs different systems to a manufacturer, and these are often built and maintained by niche suppliers who will be among the last to gain access to new, powerful tools.
“As much as I love the United States, they are not there engineering for [Australian] water [suppliers], for our electricity providers, for our banking,” Mr MacGibbon said.
“They might be engineering at some of the global scale, absolutely, but that doesn’t mean the service delivery will continue to an Australian citizen.”
The worry is shared by Saeed Akhlaghpour from UQ Business School.
“Essentially, it’s not much about this specific model — Anthropic’s Mythos — but that’s the general trend around new AI models: that they are getting better and better in terms of their cyber capabilities.”
Saeed Akhlaghpour says now “is not the moment to panic or to go to bunkers or buy gold bars”. (ABC News: Nickoles Coleman)
The associate professor of information systems said that all companies would eventually have access to Mythos-like capabilities.
“They will be available to both the good actors and the bad actors. And it would be a race, basically, between the attackers and defenders.”
Dr Akhlaghpour said it was not so much about the particular model but the “direction of travel” for AI’s power.
“It is once again important that we in Australia be proactive and try to minimise our risk. And at the same time, regulate these technologies as early as possible.”
However, he is not worried about any imminent crash or mega-hacks prompted by the technology.
“I don’t think this is a moment to panic or go to bunkers or buy gold bars.”
Regulators ‘closely monitoring’ the issue
Australia’s two top financial regulators are watching developments in the field, as is the banking industry.
The major banks have been in preliminary discussions with the regulators and are monitoring the situation.
“Banks also continue to engage closely with regulators to ensure our financial system remains safe,” Australian Banking Association chief executive Simon Birmingham said.
APRA is the prudential regulator responsible for the stability of the entire financial system.
In a statement, a spokesperson said it had noted the “vulnerability identification capabilities” of the latest AI models.
“APRA is closely monitoring this development, including engaging with peer regulators, government agencies, and regulated entities to share insights and intelligence on emerging AI risks and opportunities.
“APRA, along with peer regulators and the government agencies, will continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system.”
In a statement, corporate regulator Australian Securities and Investments Commission (ASIC) said all participants in the market “have a duty to balance innovation with the responsible and ethical use of emerging technologies”.
“While new technologies can have significant benefits to consumers, they have also led to cyber risks escalating in both scale and sophistication.
“ASIC expects financial services licensees to be on the front foot every day to ensure that their customers and clients aren’t put at risk by inadequate controls.
“Current Australian financial services licensee obligations, consumer protection laws and director duties are technology neutral.
“ASIC expects that licensees ensure that their use of AI does not breach any of these provisions.”
A spokesperson for Home Affairs Minister Tony Burke said the government took protection of critical infrastructure extremely seriously, “which is why we’re working with software providers and companies like Anthropic to make sure we are aware of emerging vulnerabilities”.
In recent weeks, the government has signed an agreement with Anthropic to work together on the progress and safety of AI.
“Much of critical infrastructure relies on a digital backbone provided by a handful of software providers,” a government spokesperson said.
“Project Glasswing is all about equipping these providers with the right tools to build more secure software and protect our infrastructure.”
However, that does not satisfy the former eSafety commissioner.
Alastair MacGibbon says you don’t need to find harm in the whole software stack to create huge problems. (ABC News: Scott Preston)
Mr MacGibbon has called on the federal government to get infrastructure providers, AI companies and security firms together in a room, worried Australia is falling behind.
“We’ve been building higher castle walls and digging deeper moats,” he said.
“While someone has not just invented gunpowder, but jumped to field artillery and is now moving to hypersonic missiles, in the time that we’re wondering whether we need to dig a deeper moat.”
