Anthropic’s AI found thousands of flaws in every major OS and browser — and cybercrime losses just hit $21 billion


As technology grows more powerful, cybercrime becomes a bigger and costlier problem, especially as agentic AI grows increasingly capable.

Last year, Anthropic said that hackers used its Claude Code tool (1) to attempt to infiltrate around 30 targets, including financial institutions and government agencies. Anthropic says this was the first time a large-scale cyberattack was carried out “without substantial human intervention.”

On April 7, Anthropic announced an initiative to help protect against bad actors: Project Glasswing (2). Anthropic says that Project Glasswing was formed because its new Claude model, Claude Mythos, “has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.”

Anthropic says that it and its Project Glasswing partners, which include Apple, Google, JPMorganChase, and Microsoft, hope to use Claude Mythos to identify these vulnerabilities and fix them before other AI models can exploit them. But the same model could be used to exploit the very vulnerabilities Anthropic is trying to patch, causing heavy economic losses.

Here’s what to know about AI’s role in cybercrime, as well as the impact it can have on the world economy.

According to the 2025 Internet Crime Report from the Federal Bureau of Investigation (3), yearly cybercrime complaints have gone up from a little under 800,000 to over 1,000,000 since 2020. In that time, yearly losses from cybercrime have gone from $4.2 billion to almost $21 billion, quintupling in the span of five years.

Those numbers only represent the direct losses reported to the FBI. Total costs to businesses are likely orders of magnitude higher, although the World Bank says it’s difficult to give accurate estimates (4) because of cybercrime’s indirect costs.

The 2025 report was also the first in which the FBI tracked how many complaints mentioned AI. At the time, about 2% of complaints were AI-related (3). Those AI-related complaints accounted for around 4% of reported losses, or almost $900 million.

Those numbers could rise in the near future. If, as Anthropic has said, AI can exploit OS and browser vulnerabilities that even highly experienced hackers can’t find, then the barrier to entry for cybercriminals becomes much lower.

The January 2026 Global Cybersecurity Outlook (5) report from the World Economic Forum says that 94% of its survey respondents, which included C-suite executives, academics, and cybersecurity leaders, thought that AI would be the “most significant driver of change in cybersecurity in the year ahead.”

AI vulnerabilities were the top concern for CEOs of high-resilience organizations and the second-highest concern for CEOs overall. Despite that concern, only 64% of respondent organizations had a process in place to assess the security of the AI tools they used, though that was up from 37% in 2025.

Claude’s brush with high-profile cybercrime shows that even AI models with safeguards in place against cybercrime are at risk of being exploited. Companies need to be diligent both in how they incorporate AI tools in the workplace and how they protect against outside AI users hoping to take advantage of vulnerabilities.


On a smaller scale, one of the best ways to protect yourself from cyberattacks is to be careful who you give your data to — especially your financial data. Research the companies you plan to bank or invest with ahead of time to see if they have a history of data breaches; that could indicate their safety protocols aren’t as strong as they should be.

Financial services companies should also offer safety tools like multi-factor authentication for your accounts, something that the Federal Trade Commission recommends (6) as a tool to protect against cyberattacks. The FTC also recommends using strong passwords for all of your online accounts.

Don’t give out sensitive information to someone who calls you first or emails you from an unverified address, even if they claim to be someone you trust. Scammers are already using AI voice cloning (7) to copy loved ones’ voices and trick you into sending money. These schemes are only going to become harder to spot as AI becomes more sophisticated.

On a larger scale, one way to prevent losing money to cyberattacks is to diversify your investments. That way, if one of the companies you invest in is hit by a major cyberattack, you won’t be hit too hard by any resulting drop in stock prices.



Anthropic (1), (2); FBI Internet Crime Complaint Center (3); World Bank (4); World Economic Forum (5); Federal Trade Commission (6), (7)

This article originally appeared on Moneywise.com under the title: Anthropic’s AI found thousands of flaws in every major OS and browser — and cybercrime losses just hit $21 billion

This article provides information only and should not be construed as advice. It is provided without warranty of any kind.


