“This creates a perfect storm for cybercriminals,” said J Stephen Kowski, Field CTO at SlashNext. “When AI models hallucinate URLs pointing to unregistered domains, attackers can simply register those exact domains and wait for victims to arrive.” He likens it to giving attackers a roadmap to future victims. “A single malicious link recommended can compromise thousands of people who would normally be more cautious.”
The findings from Netcraft's research are particularly concerning because national brands, mainly in finance and fintech, were among the hardest hit. Credit unions, regional banks, and mid-sized platforms fared worse than global giants: smaller brands, which are less likely to appear in LLM training data, had their URLs hallucinated most often.
“LLMs don’t retrieve information, they generate it,” said Nicole Carignan, Field CISO at Darktrace. “And when users treat those outputs as fact, it opens the door for massive exploitation.” She pointed to an underlying structural flaw: models are designed to be helpful, not accurate, and unless AI responses are grounded in validated data, they will continue to invent URLs, often with dangerous consequences.
Researchers pointed out that registering all the hallucinated domains in advance, a seemingly viable defense, will not work: the variations are effectively infinite, and LLMs will keep inventing new ones, opening the door to slopsquatting attacks.
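Since preemptive registration cannot scale, the more practical guardrail is the one Carignan alludes to: validating AI output before trusting it. As a rough illustration (not a method described by the researchers), the sketch below checks whether an LLM-suggested URL's domain even resolves in DNS. A non-resolving domain is a strong hallucination signal, while a resolving one proves nothing on its own, since an attacker may already have slopsquatted it, so this is at best a first filter.

```python
import socket
from urllib.parse import urlparse

def domain_resolves(url: str) -> bool:
    """Return True if the URL's host has a DNS record.

    Illustrative pre-check only: a hallucinated URL often points at an
    unregistered domain that fails to resolve -- until an attacker
    registers it. A resolving domain is NOT proof of legitimacy.
    """
    host = urlparse(url).hostname
    if not host:
        return False  # malformed input, no hostname to check
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False  # NXDOMAIN or lookup failure

# Example: a URL whose domain does not resolve should be flagged
# before it is ever shown to a user, not after.
```

In a production setting this DNS probe would be only one signal among many (domain age, certificate data, threat-intelligence feeds), precisely because slopsquatted domains resolve normally once registered.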