Cybercrime losses have risen significantly, surpassing $20 billion, while phishing and spoofing remain the dominant cyber-enabled fraud activity, the FBI reports in its annual cybercrime report.
The FBI’s Internet Crime Report 2025, compiled from complaints filed with the FBI’s Internet Crime Complaint Center (IC3), shows that losses climbed 26 percent from 2024 to a total of $20.88 billion. The average loss was $20,699.
Victims over 60 suffered the worst losses by far, with $7.75 billion across 201,266 complaints. The 50-to-59 group was second, with $3.68 billion in losses and 124,820 complaints.
Combined, these two demographics (50 and older) accounted for more than half of all losses in 2025. Phishing and spoofing was the most common complaint category, with 191,561 reports. Extortion followed with 89,129 complaints.
Identity theft and impersonation also caused significant financial damage: identity theft accounted for $185.8 million in losses, while government impersonation scams cost victims $797.9 million. The most damaging crime types were investment fraud, business email compromise, tech and customer support scams, personal data breaches, and confidence or romance scams. Investment fraud alone accounted for $8.65 billion in losses.
The FBI notes that “cyber‑enabled fraud” now represents nearly 85 percent of all losses reported to IC3 and 45 percent of all complaints, underscoring its devastating scale. The term covers crimes in which criminals use the internet or other technology to steal money, data, or identities, or to create counterfeit goods or services.
As identity‑centric attacks grow more sophisticated, the FBI is urging organizations to strengthen authentication and access controls. Recommended practices include eliminating default passwords and credentials when installing software, and requiring all accounts with password logins to comply with NIST standards.
Another recommendation to protect against ransomware is to enable multi‑factor authentication across systems such as webmail, VPNs and administrative accounts.
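As a rough illustration of the password guidance above (this sketch is not from the FBI report, and the breached-password list here is a tiny hypothetical stand-in), a NIST SP 800-63B-style check enforces a minimum length and rejects known-compromised passwords rather than imposing arbitrary composition rules:

```python
# Illustrative sketch of a NIST SP 800-63B-style password check.
# BREACHED is a placeholder; real deployments would screen against a
# full compromised-credential corpus.
BREACHED = {"password", "123456", "letmein", "qwerty"}

def is_acceptable(password: str) -> bool:
    """Return True if the password passes minimal NIST-style checks."""
    if len(password) < 8:             # minimum length for user-chosen secrets
        return False
    if password.lower() in BREACHED:  # reject known-compromised values
        return False
    return True

print(is_acceptable("letmein"))                       # breached: False
print(is_acceptable("correct horse battery staple"))  # long passphrase: True
```

Notably, 800-63B favors length and breach screening over forced complexity and periodic rotation, which tend to push users toward predictable patterns.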
Voice impersonation a systemic challenge for healthcare
Jason Barr argues that the FBI’s IC3 report reveals how cybercrime has shifted. The VP of healthcare for Pindrop sees the growth in social engineering tactics, real-time deception, and AI-enabled impersonations as part of a pattern.
“Many of the highest-loss categories appear to involve some form of human interaction — conversations, not just code,” he writes on the Pindrop blog.
“To me, that suggests a meaningful shift in the threat model. Security is no longer defined solely at login. It’s being tested in real time, at the moment of interaction.”
The result is that identity can no longer be verified only at login; it must be continuously assessed during the interaction itself, using biometrics such as voice and behavior alongside device intelligence. Continuous assessment of authenticity could counter the threat of generative AI and injection attacks.
Barr believes this shift has serious implications for healthcare, which relies heavily on phone‑based workflows. These voice channels are the hub of sensitive operations and lead to Protected Health Information (PHI), benefits and internal systems. But they remain some of the least protected.
Healthcare identity is also complex, with patients, caregivers, providers and staff often acting on behalf of others. This complexity is exacerbated by fragmented systems, Barr argues, creating ambiguity that traditional IAM tools struggle with.
Authentication methods such as knowledge‑based questions, one‑time passwords and agent judgement have become increasingly fragile in an AI‑driven threat landscape. Synthetic voices, stolen data and automated impersonation tools now make it far easier to bypass these controls.
AI voice cloning is growing so quickly that it has drawn congressional scrutiny in the U.S. New Hampshire Senator Maggie Hassan last week pressed four major companies — ElevenLabs, LOVO, Speechify and VEED — for detailed answers about what they are doing to stop scammers from turning synthetic speech tools into engines of fraud.
Meanwhile, Barr notes that attackers are using AI to erode trust in the voice channel itself. Synthetic callers can convincingly mimic real people, probe authentication flows and launch targeted impersonation attempts at scale.
For healthcare, Barr concludes, the inability to verify who or what is on the other end of a call represents systemic exposure, with direct implications for PHI breaches, account takeover, fraudulent claims and downstream attacks such as ransomware.
Article Topics
AI fraud | cybersecurity | digital identity | fraud prevention | identity theft | Pindrop