Crims are taking advantage of AI to sharpen old scams. The FBI reported Monday that cybercrime losses hit a record $20.87 billion in 2025, with help from bots.
According to the Internet Crime Complaint Center’s (IC3) annual Internet Crime Report [PDF], the total number of cybercrime complaints submitted to the agency topped one million for the first time, increasing 17 percent compared to 2024. Last year also marked the first time total reported cybercrime losses surpassed $20 billion, the FBI noted in the introduction to the report.
As for what’s being complained about, phishing led the pack with 191,561 reports, followed by extortion and investment scams. As for where the big money went, investment scams topped the list with $8.6 billion in reported losses, followed by business email compromise (BEC) and tech support scams.
All three of the top money makers fall under the FBI’s cyber-enabled fraud category, which includes any use of the internet or other tech to commit classic fraud scams. Cyber-enabled fraud was involved in 45 percent of 2025’s complaints, but 85 percent of financial losses.
In other words, cybercrime continues to mostly be about using the internet to extend the reach of a classic con, with classic “hacking” making up a minority of reported cybercrime incidents. Of the classic cyber threats reported to the FBI last year, data breaches and ransomware accounted for 75 percent.
AI enters the threat list
It’s no surprise that 2025 also marked the first year in the IC3 report’s history to include a special section on artificial intelligence: AI has been hailed as a profitable innovation for the online criminal underground multiple times over the past year. Interpol even reported last month that financial fraud schemes aided by AI tend to be 4.5 times more profitable than those perpetrated without the help of a bot.
Per the IC3, 22,364 reports lodged last year involved AI, with more than $893 million in losses attributed to those reports. That’s a drop in the bucket compared with the nearly $21 billion in total losses reported to the IC3 last year, though the FBI noted in the report, and in an email to The Register, that the number may be higher than reported.
“AI-related complaints are determined via the complainants’ statements and keywords they may use throughout the complaint,” the FBI told us, because most complainants don’t realize AI is involved in their issue. “AI-related counts are dependent upon the quality and wording of information provided by the complainant; therefore, it is possible the number could be higher.”
Nonetheless, what’s actually interesting about the AI section is the types of AI-related cybercrime the FBI chose to highlight, alongside three years of records from the IC3 report. The FBI singled out BEC, confidence/romance scams, employment lures, and investment cons as four ways AI is commonly used in cybercrime, generally by “deploying fake social profiles, voice clones, identification documents, and believable videos depicting public figures or loved ones,” the Bureau said in a press release.
With the exception of BEC, which has been reported at generally the same rate in the past three years, all of the types of crime the FBI mentioned have seen considerable spikes in complaint counts to the IC3 from 2023 to 2025. All four, meanwhile, showed considerable increases in total reported financial losses during the same three-year period.
As for AI references in complaints, BEC was surprisingly underrepresented given that it ranks among the highest categories for total reported financial losses. That mismatch suggests the relatively few AI-powered BEC attacks are netting outsized hauls: they’re where the money’s at for scammers.
One area that merited surprisingly little mention, and none in relation to AI at all, despite its meteoric rise in complaints over the past three years, is government impersonation scams. Defined as any instance in which a criminal impersonates a government official to extort money from a victim, government impersonation reports showed the greatest rise between 2023 and 2025, going from 14,190 reports in 2023 to 32,424 last year – a 128 percent increase in three years.
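The 128 percent figure can be sanity-checked against the two complaint counts from the report; a quick back-of-the-envelope calculation (in Python, purely for illustration) bears it out:

```python
# Government impersonation complaint counts from the IC3 report
reports_2023 = 14_190
reports_2025 = 32_424

# Percentage increase from 2023 to 2025
pct_increase = (reports_2025 - reports_2023) / reports_2023 * 100
print(round(pct_increase))  # prints 128
```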
As the FBI itself warned last May, AI-generated voice messages and text messages were being used to impersonate senior US officials in a campaign targeting current and former senior federal and state government officials, along with their contacts, to gain access to personal accounts. It was enough of a concern for the bureau to warn that messages claiming to come from senior US officials should not be assumed to be authentic.
“It has never been more important to be diligent with your cybersecurity, social media footprint, and electronic interactions,” FBI criminal and cyber branch operations director Jose Perez said of this year’s findings. “Cyber threats and cyber-enabled crime will continue to evolve as the world embraces emerging technologies such as artificial intelligence.” ®
