Experian’s Christine Foster on the AI-powered cybercrime threat


The rise of generative AI has made it easier than ever before for cybercriminals to launch convincing scams, forge deepfake identities, and evade detection. Experian’s latest UK Fraud and FinCrime report shows just how quickly the threat is escalating, with AI-related fraud surging from 23% in 2024 to 35% in early 2025.

British businesses have not stood idle in the face of this new and insidious threat. They’re fighting back, increasing their fraud prevention budgets and accelerating their adoption of AI and machine learning to detect and prevent attacks. But, for Christine Foster, Experian’s general manager for its GenAI Centre of Expertise, these investments count for little unless they are supported by high-quality data.

As AI-powered tools become standard in the criminal toolkit, she argues that companies must shore up their defences by adopting equally advanced, adaptive AI solutions, backed by the right technologies, internal expertise, and governance frameworks. In this Q&A, edited for length and clarity, she talks to Tech Monitor about the risks AI represents when it’s in the wrong hands – and what businesses can do to defend themselves effectively.

AI has democratised cybercrime, says Experian’s Christine Foster – but, she adds, businesses haven’t been left defenceless. (Photo: Experian)

How has generative AI changed the threat landscape for businesses in recent years?

What’s become really concerning is the increasing speed and scale at which cybercriminals can create the tools of their trade thanks to AI. Fraudsters have existed since before the internet, and they’ve always been fast and imaginative in how they attack, but generative AI has lowered the technical bar to entry. 

Previously, executing fraud might have required advanced technical skills – now, less-savvy actors can generate anything from phishing emails to synthetic identities at a scale and speed that can be really damaging. 

What are some of the attacks you’ve been seeing from those using AI?

AI tools (especially generative ones) are making it easier to produce convincing content. One clear example is voice cloning. Fraudsters can now replicate voices, either to impersonate someone in a position of authority or trust, or to create multiple synthetic personas. That can be used to build a false sense of authenticity and manipulate victims into acting on something that’s completely fabricated.

We’re also seeing AI’s impact on phishing attacks. These used to be inexpertly written and inexpertly delivered. But now, attackers can craft highly polished messages that seem legitimate. Just as someone may use generative AI to improve their grammar or accomplish a task with good intentions, a fraudster can use the same tools for a malicious purpose. It’s the same underlying capability, just applied with a very different intent.

I was struck by the sharp increase in fraud we’ve seen: AI-related fraud jumped from 23% to 35% in just a year (2024-2025). Encouragingly, 68% of UK businesses said they’re planning to increase their fraud prevention budgets next year, though, which is absolutely the right response.

We’re also seeing a broader adoption of AI and machine learning to help both detect and prevent fraud. More than half of the businesses surveyed are investing in improving their AI capabilities this year. It’s clear that organisations are taking the threat seriously and focusing on the right areas.

How are UK fraud teams using AI to stay ahead?

Machine learning and generative AI are invaluable for analysing massive datasets in real time, something that’s key to staying ahead of fast-moving fraud trends. This is allowing businesses and fraud teams to move toward more proactive fraud detection: spotting patterns as they emerge, rather than reacting after the fact.

AI analysing datasets in real time helps in two crucial ways. First, it improves accuracy, catching more fraudulent activity early. Second, it allows fraud teams to concentrate on genuinely fraudulent cases, rather than spending time investigating cases that have been incorrectly flagged. Reducing these false positives means legitimate customers aren’t mistakenly pulled up.
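The trade-off Foster describes can be made concrete with a minimal sketch. This is not Experian's system – it is a toy illustration, with made-up risk scores and labels, of how raising or lowering an alert threshold trades frauds caught against legitimate customers wrongly flagged:

```python
import random

random.seed(42)

# Toy labelled transactions: (risk_score, is_fraud). In practice the score
# would come from a trained model; here it is synthetic for illustration.
transactions = (
    [(random.uniform(0.6, 1.0), True) for _ in range(20)]      # fraud skews high
    + [(random.uniform(0.0, 0.7), False) for _ in range(200)]  # legitimate skews low
)

def flag_stats(threshold):
    """Count true and false positives for a given alert threshold."""
    flagged = [(score, fraud) for score, fraud in transactions if score >= threshold]
    true_pos = sum(1 for _, fraud in flagged if fraud)
    false_pos = len(flagged) - true_pos
    return true_pos, false_pos

for threshold in (0.5, 0.7, 0.9):
    tp, fp = flag_stats(threshold)
    print(f"threshold={threshold}: {tp} frauds flagged, {fp} legitimate customers flagged")
```

A better model shifts the two score distributions apart, so a single threshold can catch more fraud while flagging fewer genuine customers – which is the accuracy gain described above.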

What would you say is essential for businesses to bear in mind when starting new AI projects?

Data, data, data! Regardless of whether you’re using generative AI for fraud detection, personalisation, or traditional machine learning for process improvement, the foundation is always high-quality data. It can be easy to get caught up in the hype around new models or capabilities, but none of it works well without reliable, well-structured data.

I’ve seen firsthand how crucial it is to work with real, meaningful data. It’s something we advise our B2B customers on constantly: make sure your AI strategy is rooted in your business strategy and underpinned by sound data. It’s not flashy, but it’s fundamental.

One area that’s often overlooked is the modelling of rare events. These are, by definition, underrepresented in data, and that makes them hard to learn from. You can’t always afford to wait years to collect enough examples. That’s where synthetic data becomes really important. There are now well-established techniques to generate synthetic data, and the best models – including many of today’s advanced language models – use a combination of real and synthetic data to improve performance.
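One well-established family of techniques Foster alludes to is SMOTE-style oversampling: generating new minority-class examples by interpolating between real ones. The sketch below is a stdlib-only illustration with invented feature vectors, not a production pipeline:

```python
import random

random.seed(0)

# A handful of real fraud examples (feature vectors) - rare by definition.
fraud_cases = [
    [120.0, 3.2], [150.0, 2.8], [135.0, 3.5], [160.0, 3.0],
]

def synthesize(cases, n_new):
    """SMOTE-style interpolation: each synthetic point lies on the line
    segment between two real minority-class examples."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(cases, 2)
        t = random.random()  # interpolation factor in [0, 1]
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Augment the rare class so a downstream model sees a less skewed dataset.
augmented = fraud_cases + synthesize(fraud_cases, 16)
print(f"{len(fraud_cases)} real fraud examples expanded to {len(augmented)}")
```

Because each synthetic point is a convex combination of two real ones, every generated feature stays within the range observed in the real data – one reason interpolation is a common starting point before reaching for heavier generative approaches.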

How do you see the future of AI evolving in fraud prevention?

AI will continue to evolve rapidly in the fraud space; it will become more sophisticated and more capable. The pace of innovation really won’t slow down.

For businesses, that means keeping up isn’t optional. AI-powered tools will increasingly be used by bad actors, which means matching that pace with equally advanced, adaptive AI solutions is essential to staying ahead.

Being on the front foot means investing in the right technologies, building internal expertise, and ensuring governance frameworks are in place to deploy AI responsibly and effectively.



