AI-Powered Scam Factories Now Bigger Than Global Drug Trade, Fintech Experts Warn

Fox noted that fraud does not begin on financial platforms.

“The cybercriminals are platform and institution agnostic, and it starts on non-financial institution platforms — dating apps, social media,” she said.


The fix: connect the dots, ditch the static rules

Both panellists were unequivocal that the industry’s current defences are inadequate — and that the solution lies not in more technology in isolation but in better-connected systems and human oversight.

Luhur pointed to a structural flaw that criminals are actively exploiting: siloed security infrastructure within financial institutions.

KYC teams, onboarding systems, authentication platforms and transaction monitoring tools frequently operate independently, with no shared data or unified command.

“If you can just connect the time when a customer comes in and the time when money flows out of that person’s account and connect their face, their device, and their biometrics—you’re going to be in a lot better shape,” he said. “You’re going to solve most of your problems by doing something that’s honestly relatively simple. Not easy, but simple.”
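Luhur's "connect the dots" idea can be sketched as a simple cross-silo check. The sketch below is purely illustrative — all field names and thresholds are hypothetical — and assumes each silo (onboarding, device, biometrics, transaction monitoring) can export events keyed by a shared customer ID:

```python
from datetime import datetime, timedelta

# Hypothetical onboarding records exported by an otherwise-siloed
# KYC system, keyed by customer ID (all field names illustrative).
onboarding = {
    "cust-001": {
        "time": datetime(2024, 5, 1, 9, 0),
        "device_id": "dev-A",
        "face_hash": "f1",
    },
}

def flag_outflow(customer_id, outflow_time, device_id, face_hash,
                 min_account_age=timedelta(days=7)):
    """Cross-check an outgoing transfer against onboarding signals.

    Returns a list of reasons the transfer looks risky; an empty
    list means no cross-silo red flags.
    """
    reasons = []
    record = onboarding.get(customer_id)
    if record is None:
        return ["no onboarding record"]
    if outflow_time - record["time"] < min_account_age:
        reasons.append("money moving out soon after account creation")
    if device_id != record["device_id"]:
        reasons.append("device differs from onboarding device")
    if face_hash != record["face_hash"]:
        reasons.append("biometric mismatch with onboarding")
    return reasons

# A transfer one day after onboarding, from an unfamiliar device:
print(flag_outflow("cust-001", datetime(2024, 5, 2, 10, 0), "dev-B", "f1"))
```

Each check is trivial on its own; the point of the quote is that the signals usually live in systems that never compare notes.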

He was equally pointed in his criticism of legacy fraud detection tools.

“Most financial institutions are still on systems with engineered static rules — that’s the reality — and you need to upgrade.”
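The limitation of engineered static rules can be illustrated with a toy comparison — a fixed threshold versus a per-customer baseline. The numbers and the choice of a standard-deviation cut-off are hypothetical, not a description of any institution's actual system:

```python
import statistics

def static_rule(amount, threshold=10_000):
    """Legacy-style rule: flag any transfer over a fixed amount."""
    return amount > threshold

def adaptive_rule(amount, history, k=3.0):
    """Flag a transfer far outside the customer's own history:
    more than `k` standard deviations above their mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return amount > mean + k * stdev

history = [120, 80, 150, 95, 110]   # typical transfers for this customer
# A 5,000 transfer sails under the static threshold but is wildly
# out of character for this account:
print(static_rule(5_000))             # False
print(adaptive_rule(5_000, history))  # True
```

A scammer who knows the fixed threshold simply stays under it; a baseline tied to the customer's own behaviour is harder to game.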

Fox reinforced the argument for human intervention, warning that AI, despite its strengths in pattern recognition, cannot grasp intent or adapt to context without human input.

She recalled an incident in which a client’s AI system incorrectly flagged thousands of legitimate account applications in Latin America as fraudulent because utility bills in the region commonly carry advertisements that the system mistook for suspicious activity.

The problem was only caught because customer support staff — operating in a separate silo — began receiving complaints.

“Having that human in the loop and making sure that your humans are talking to each other is very important,” she said.


Regulators must get specific

The panellists also took aim at regulatory frameworks that they argued have been too vague to be effective. Luhur cited Indonesia’s experience, where financial industry bodies had lobbied for light-touch, principles-based oversight – a position he now regards as a mistake.

“You can’t just say it has to be safe. You have to be pretty detailed about what ‘safe’ means — what low risk, medium risk, and high risk mean and what types of tools and standards you need to apply,” he said. “When you need the whole infrastructure to change, the regulator has got to have some teeth.”

He pointed to two more prescriptive regulatory models as examples worth emulating: the Philippines’ Anti-Financial Account Scamming Act (AFASA), which mandates transaction monitoring and behavioural analytics across all financial institutions and fintechs; and the Monetary Authority of Singapore’s move to require continuous penetration testing and vulnerability assessments.

“Certain regulators are already being prescriptive because it’s already a massive problem, and the industry needs to move,” Luhur said.

On the question of AI-powered defensive tools, Luhur argued that institutions need not wait for bespoke solutions.

Running existing large language models against publicly available cybersecurity frameworks, he said, could already expose a significant number of vulnerabilities in any organisation’s systems — and at a fraction of the cost and time of traditional manual penetration testing.
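In practice this amounts to prompt assembly: pair a system description with a published checklist and ask an off-the-shelf model to audit item by item. The sketch below is a minimal illustration of that workflow — the checklist items and system description are hypothetical, and the model call itself is left as a stub because providers’ APIs differ:

```python
# Hypothetical checklist items standing in for a published security
# framework's controls (illustrative only, not quoted from any standard).
CHECKLIST = [
    "Is multi-factor authentication enforced for all admin accounts?",
    "Are failed-login alerts monitored in real time?",
    "Is customer PII encrypted at rest?",
]

def build_review_prompt(system_description, checklist=CHECKLIST):
    """Assemble one prompt asking a model to audit a system
    description against each checklist item in turn."""
    items = "\n".join(f"{i}. {q}" for i, q in enumerate(checklist, 1))
    return (
        "You are a security reviewer. For each item below, answer "
        "PASS, FAIL, or UNKNOWN based only on the description, and "
        "explain briefly.\n\n"
        f"System description:\n{system_description}\n\n"
        f"Checklist:\n{items}"
    )

prompt = build_review_prompt(
    "Admin portal uses passwords only; logs reviewed weekly."
)
print(prompt)
# In practice: response = some_llm_client.complete(prompt)  # provider-specific
```

The value Luhur describes comes from scale — a model can grind through hundreds of such items far faster than a manual penetration-testing engagement, even if its answers still need human review.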

“It’s scary to find out all of the holes you have in your system — but wouldn’t you want to know the holes before something happens, as opposed to being completely blindsided?”
