
Align liveness detection with cybersecurity best practices to stop generative AI

Generative AI is being used to fuel a range of fraud attacks, from fake kidnappings to digital injection attacks against biometric systems. To iProov Founder and CEO Andrew Bud, the implications are far-reaching and incredibly serious, but he is not surprised.

“It was very clear that this was coming. And if it’s unchecked, I think it presents a threat to the stability of society,” Bud told Biometric Update in an interview.

Social interactions and transactions increasingly occur remotely, and are only possible given sufficient trust. Recent developments in AI threaten that trust.

“In the end this is going to come down to biometrics and trust in biometrics,” Bud says.

Trust in biometrics comes from liveness assurance, which includes presentation attack detection (PAD), but also the ability to defend against injection attacks, which Bud calls “the real threat.” It was Bud who revealed last October that injection attacks are already more common than presentation attacks.

The challenge with deepfakes is that they can be injected via camera emulators to attack video selfie systems that are otherwise relatively resilient against traditional presentation attacks.

“Most industry players who claim to defend against deepfakes today actually rely upon stopping it at the source, upon detecting it in the device,” according to Bud.

This is insufficient, he says, because only the relatively primitive injection attacks seen until recently could be spotted by legacy technology and by people. Generative AI becomes roughly ten times harder to detect every year, Bud estimates, and iProov has observed successful attacks from “endpoints that look like mobile devices but aren’t” that evaded on-device detection methods.

“A fundamentally different approach”

Injection attacks and generative AI represent “a different kind of problem from the liveness problem, the presentation attack problem,” Bud says.

These attacks must be detected “in the cloud, based on information in the biometrics, and to do so in a dynamic and continually adaptive way.”

But what technology is capable of identifying these attacks?

“You need a fundamentally different approach to solving a moving-target problem like that than a stationary-target problem like good old presentation attacks,” Bud says.

That approach is based on continuous monitoring and adaptation, more like the dominant paradigm in the broader cybersecurity space.
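The continuous monitoring and adaptation Bud describes can be pictured as a feedback loop in which the decision boundary itself tracks observed attacks. The toy class below is an illustrative sketch of that pattern only; all names and numbers are assumptions, not a description of iProov's actual system:

```python
from collections import deque


class AdaptiveLivenessMonitor:
    """Toy sketch of a continuously adapting detector threshold.

    A rolling window of recent 'attack-likeness' scores from flagged
    attacks is kept, and the decision threshold is re-fit as the
    observed attack distribution drifts (the 'moving target').
    """

    def __init__(self, window=1000, percentile=0.95):
        self.scores = deque(maxlen=window)  # recent flagged-attack scores
        self.percentile = percentile        # fraction of known attacks to keep catching
        self.threshold = 0.5                # starting decision boundary (assumption)

    def record_attack_score(self, score: float) -> None:
        """Feed back a score from a confirmed attack, then adapt."""
        self.scores.append(score)
        self._adapt()

    def _adapt(self) -> None:
        # Move the threshold so ~95% of recently observed attack
        # scores would still be flagged as the attacks evolve.
        if len(self.scores) >= 20:
            ranked = sorted(self.scores)
            idx = int((1 - self.percentile) * len(ranked))
            self.threshold = ranked[idx]

    def is_suspicious(self, score: float) -> bool:
        return score >= self.threshold
```

The point of the sketch is the feedback edge: a static, shipped-once detector has no `record_attack_score` path, which is exactly the gap Bud attributes to on-premises software.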

Bud argues that “almost nobody apart from iProov actually has any visibility of real attacks.” He attributes this advantage to the combination of cloud processing and the company’s Security Operations Centre, launched in 2020.

“We and we alone process every liveness transaction in the cloud, and monitor and manage them through active threat intelligence,” Bud states. “We’re the only operators in the world with a biometric security operations center. We’re the only company that analyzes every transaction attempt done anywhere in the world on any of our customers to look for signals of attack and novel mode attacks.”

Building the system, which “captures everything in the world,” was difficult and expensive, Bud says, perhaps explaining why the move has not been replicated.

Vendors who sell software to run on-premises have “no visibility whatsoever of the evolving scenario,” Bud argues. “There is no knowledge of when and how exploits are being found, and there is absolutely no mechanism for feeding back information about those exploits into adaptations in the software.” Further, he believes software sold to run on-prem inevitably makes its way into attackers’ hands.

He refers to those kinds of biometric solutions as “zero-day exploit machines.”

Analysis of attacks in the wild is used to inform the evolution of iProov’s systems and technology, according to Bud.

“Only a system that does this can be trusted, because in the world of generative AI, we’re up against immense resources available to the attackers – many of them are nation states – with world-class skills and technology.”

The volume of monitoring iProov carries out gives it “an informational advantage” or “information asymmetry,” Bud says. “Every time they attack us, we learn more about them than they learn about us. We see everything about their attack.”

The bottom line for Bud is that the common perception of technologies as “the vulnerable and the invulnerable” is “completely untrue.”

“The question is: How does the system respond to those exploits?” he explains.

Fraud risk and modality

Bud pushes back against another common perception about biometrics, reasoning that credential compromise occurs when something secret becomes public.

“No biometric should be considered secret,” he says. Biometric security does not depend on secrecy, only on genuineness. All biometric security therefore depends on the ability to detect forgery, he reasons. Those who provide that security must find buried signals of falseness within the data.

This gives face an inherent advantage over other biometric modalities, Bud says.

His explanation draws on time spent in voice communications prior to founding iProov.

Facial images contain megabytes of data, while voice is transmitted using the GSM half rate codec at 12.5 kbps. Stakeholders can only demand more data by excluding users with low-bandwidth handsets or networks.
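The asymmetry is easy to see with back-of-envelope arithmetic. The figures below are illustrative assumptions only: the 12.5 kbps voice rate cited above, and a nominal 2 Mbit/s for a compressed 720p selfie video stream.

```python
# Back-of-envelope comparison of the raw signal available to a
# forgery detector. All rates are illustrative assumptions, not
# measurements of any particular system.

def kb_per_second(bits_per_second: int) -> float:
    """Convert a bitrate to kilobytes per second."""
    return bits_per_second / 8 / 1024

# Narrowband telephone voice at the 12.5 kbit/s rate cited in the article.
voice_kb_s = kb_per_second(12_500)

# A modest compressed 720p selfie video stream at ~2 Mbit/s (assumption).
video_kb_s = kb_per_second(2_000_000)

ratio = video_kb_s / voice_kb_s
print(f"voice: {voice_kb_s:.2f} KB/s, video: {video_kb_s:.1f} KB/s, "
      f"~{ratio:.0f}x more data for the detector to mine")
```

Under these assumed rates the video channel carries about 160 times more data per second, which is the "buried signals of falseness" advantage Bud attributes to face over voice.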

That rate was chosen “based on modelling of [the] human vocal tract,” and according to Bud, “generative AI doesn’t struggle much with simulating that.”

“The fundamental problem you’ve got is that [there] is not enough [data] to find the increasingly weak signals of falseness,” due to increasing generative AI capabilities, Bud explains.

Security will increasingly have to rely on one-time biometrics, he asserts.
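One-time biometrics follow the familiar challenge-response pattern: each transaction is bound to a fresh, short-lived server challenge, so a replayed or pre-generated capture fails verification. The sketch below is a generic illustration of that pattern, not iProov's proprietary scheme; all names and parameters are assumptions.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)   # server-side signing key (illustrative)
TTL_SECONDS = 30          # challenge lifetime (assumption)


def issue_challenge() -> tuple[bytes, float]:
    """Server issues a fresh nonce (e.g. encoded as a one-time
    screen-illumination sequence) with a timestamp."""
    return os.urandom(16), time.time()


def sign_response(nonce: bytes, capture_digest: bytes) -> bytes:
    """Bind the biometric capture to this specific challenge."""
    return hmac.new(SECRET, nonce + capture_digest, hashlib.sha256).digest()


def verify(nonce: bytes, issued_at: float,
           capture_digest: bytes, tag: bytes) -> bool:
    """Accept only a fresh capture bound to this exact nonce."""
    if time.time() - issued_at > TTL_SECONDS:
        return False  # stale challenge: possible replay
    expected = hmac.new(SECRET, nonce + capture_digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because the nonce changes per transaction and expires quickly, a deepfake rendered in advance cannot answer the challenge, which is the property that makes one-time biometrics resistant to injected replays.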

And effective, adaptable, responsive, actively threat-monitored liveness is fundamental to sustaining trust in the context of rapidly developing generative AI.

Assessing process, rather than tech

This also makes assessment a different kind of challenge.

“What we’re seeing at the moment is: the only way in which you as a buyer can trust that you are buying a reliable liveness solution that is robust against injection attacks is to look at who has already reviewed and tested and bought the technology who has the resources themselves to independently test it.”

Bud gives the U.S. federal government as an example. iProov went through a 3-month “federal intelligence contractor red team exercise” while being considered as a federal technology supplier. A team of 8 people was dedicated to assessing iProov’s vulnerability to deepfakes during this process, Bud says.

Private companies can’t afford to do that in most cases, though ING, iProov’s first customer, hired “national security level pentester” Outflank to test its liveness for 2 months as well.

These assessments, according to Bud, allow customers to answer the critical question: “What is the vendor doing to detect, respond to and resist such attacks?”

There is no certification or standard yet established for this kind of defense. Bud says customers are starting to recognize that they need to be able to answer that question, rather than possess a certain technology.

“Cybersecurity certifications in general don’t look at the particular technology, they look at the business system that assures resilience,” Bud points out. “I think that’s the way it’s going to go.”

Ultimately, he sees biometric liveness as a bedrock for societal trust, so long as it is robust against changing threats. “It needs to be a business system sized and calibrated to the dynamic nature and the scale of the threat.”

He believes no iProov competitor has the kind of one-time biometrics and active threat intelligence it has, but more importantly, that customers need to look beyond the marketing when choosing a vendor for biometric security.

He says the narrative of stopping injection attacks at the source is “pernicious,” and even puts systems at risk of being used for money laundering by sanctioned regimes looking to fund weapons programs.

This kind of threat removes the problem from the domain of confidence, where compliance theatre can suffice, Bud says. He urges stakeholders to “please take the problem seriously.”

Article Topics

biometric liveness detection  |  deepfakes  |  face biometrics  |  fraud prevention  |  generative AI  |  injection attacks  |  iProov  |  presentation attack detection

