Hackers use AI to bypass biometric security

Hackers are hard at work using advances in artificial intelligence (AI) and other technologies to overcome the biometric defences increasingly being deployed by banks.

This is according to law firm Baker McKenzie, which asked three experts about the use of biometrics in finance: Elizabeth Roper, partner at Baker McKenzie; Chris Allgrove, director and co-founder of Ingenium Biometrics Laboratories; and Helen Vigue, identity and data director at GSMA.

Baker McKenzie comments that the use of biometrics by governments remains a source of public anxiety but, according to the experts, consumers have come to accept its use by financial institutions for the account security it provides.

It notes that the advent of touch ID on smartphones a decade ago catalysed the emergence of biometrics as a mainstream identity technology.

As with digital innovation generally, in banking it has been digital-first challengers that have led the way in using biometrics for customer authentication, says the firm.

Today, it adds, biometrics are an everyday tool of authentication used by incumbent and challenger banks alike and are particularly favoured in customer onboarding.

Multiple layers of protection

“Most banks, for example, have integrated biometric authentication to one degree or another into their know-your-customer and anti-money-laundering processes,” says Allgrove.

According to Allgrove, banks can have greater assurance over biometrics’ accuracy for these purposes when the tool resides on their own secure server rather than on the user’s device.

Financial institutions typically use biometrics in tandem with other means of fraud prevention – some traditional, such as passwords, PINs and memorable questions, and some digital, such as those provided by mobile network operators (MNOs), Baker McKenzie says.

“Banks employ multiple layers of fraud protection, of which biometrics are one important layer,” says GSMA’s Vigue.

She points out that MNO services provide another critical layer of protection in the form of number verification and SIM-swap checks – the latter protecting against account takeover attempts in which hackers gain control of bank customers’ mobile SIM cards. “They all feed into institutions’ overall risk engine.”
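To make that layered “risk engine” idea concrete, the sketch below combines a biometric match score with traditional and MNO-supplied signals into a single decision. It is a minimal illustration in Python, assuming hypothetical field names, weights and thresholds; it does not describe any bank’s or GSMA’s actual scoring logic.

```python
# Illustrative sketch only: the signal names, weights and thresholds below
# are hypothetical, not any institution's real risk-engine configuration.
from dataclasses import dataclass


@dataclass
class AuthSignals:
    biometric_score: float   # 0.0-1.0 match confidence from the biometric check
    pin_verified: bool       # traditional factor: PIN/password check
    number_verified: bool    # MNO number-verification result
    recent_sim_swap: bool    # MNO flag: SIM swapped within a risk window


def risk_score(s: AuthSignals) -> float:
    """Combine the layered signals into one risk score (higher = riskier)."""
    risk = 1.0 - s.biometric_score      # a weak biometric match raises risk
    if not s.pin_verified:
        risk += 0.3
    if not s.number_verified:
        risk += 0.2
    if s.recent_sim_swap:
        risk += 0.5                     # strong account-takeover indicator
    return risk


def decide(s: AuthSignals, step_up_at: float = 0.4, deny_at: float = 0.9) -> str:
    r = risk_score(s)
    if r >= deny_at:
        return "deny"
    if r >= step_up_at:
        return "step-up"                # demand an additional factor
    return "allow"


# Example: a strong biometric match is still stepped up after a fresh SIM swap
print(decide(AuthSignals(0.95, True, True, True)))   # -> "step-up"
```

The value of the layering is visible in the example: even a near-perfect biometric match is escalated when the MNO flags a recent SIM swap.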

Notwithstanding lingering questions about the accuracy of biometric tools, and about how thoroughly banks test them before deployment, Allgrove deems their contribution to stronger fraud prevention to have been substantial.

Roper agrees: “Account takeover, for example, remains a huge concern for the financial industry, but biometrics are without question making it more difficult to commit that type of fraud.”

Even so, Baker McKenzie says, bank security chiefs and biometrics providers cannot rest on their laurels.

Continuing advances in technology, particularly AI, are providing criminals with a seemingly endless flow of opportunities to defeat biometric defences.

“With the growing availability of AI tools, biometrics hacking – stealing someone’s biometric information and using it to impersonate them – will become easier,” says Roper.

She points out that breaches of biometric identifier databases are rare but not unheard of. “Right now, biometric hacking is probably cost-prohibitive for most hackers, but well-financed criminal gangs are certain to be exploring how they can use AI in tandem with other methods to develop that capability.”

Allgrove’s biggest concern about AI relates to deepfakes – image, video or voice files created using the stolen biometric details of others.

“Injection attacks can be sophisticated, and they have the potential to be a game-changer,” Allgrove says. In an injection attack, fabricated media is fed directly into the authentication data stream, bypassing the device’s camera or microphone altogether.
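One widely used countermeasure is active, challenge-based liveness detection, which makes pre-rendered deepfakes harder to replay. The Python sketch below illustrates the idea under stated assumptions: the challenge list, time limit and function names are hypothetical, and real products combine many more signals.

```python
# Minimal sketch of challenge-based liveness detection, assuming hypothetical
# names (CHALLENGES, issue_challenge, verify_response) and timings; this is
# not a description of any vendor's actual product.
import secrets
import time

CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]


def issue_challenge() -> dict:
    """Server picks an unpredictable action with a short expiry, so a
    deepfake clip rendered in advance cannot simply be replayed."""
    return {
        "nonce": secrets.token_hex(16),
        "action": secrets.choice(CHALLENGES),
        "expires_at": time.time() + 10.0,   # seconds allowed to respond
    }


def verify_response(challenge: dict, observed_action: str, nonce: str) -> bool:
    """observed_action is what a pose/gesture classifier saw in the live
    video; here we only check it against the issued challenge."""
    if time.time() > challenge["expires_at"]:
        return False    # too slow: consistent with offline rendering
    if nonce != challenge["nonce"]:
        return False    # response not bound to this authentication session
    return observed_action == challenge["action"]


# Example flow
ch = issue_challenge()
print(verify_response(ch, ch["action"], ch["nonce"]))   # -> True
print(verify_response(ch, "smile", "stale-nonce"))      # -> False
```

Because the required action is random and expires quickly, an attacker cannot inject a clip generated in advance; the correct response would have to be synthesised in real time.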

From a local perspective, Lance Fanaroff, chief strategy officer of biometrics firm iiDENTIFii, says cyber criminals are increasingly weaponising generative AI tools to spoof human images and voiceprints used for identification at scale.

“These tools are growing more and more accessible, with professional criminals strengthening deepfake and bot attacks, and amateurs sourcing fraud-as-a-service kits on the dark web,” says Fanaroff.

“Essentially, AI is able to define and replicate one of the strongest defences companies have had against crime in the past: authenticity. This leaves us with the question: how do we get one step ahead in how we discern a genuine human interaction from a fraudulent one?”

According to Fanaroff, 4D liveness facial biometrics is a safe and foolproof solution to this challenge, but there are additional, complementary solutions entering the market.

For example, he says, biometric systems that authenticate users with real-time data, such as typing patterns, gait analysis or voice recognition, could offer enhanced security that doesn’t require consumers to perform any explicit authentication actions.

“Take the case of behavioural biometrics, which analyses a user’s digital, physical and cognitive behaviour to distinguish whether the person behind the screen is a legitimate customer.

“A genuine user, for example, enters information in a particular manner. This pattern in human activity is increasingly being identified and analysed through machine learning, aiding in discerning whether an online activity is being driven by a human or is part of an automated attack.”
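As a rough illustration of the signals behavioural biometrics draws on, the sketch below derives two classic keystroke-dynamics features, dwell time (how long a key is held down) and flight time (the gap between one key’s release and the next key’s press), and compares a sample against an enrolled profile. The field names and tolerance are hypothetical; production systems feed such features to trained machine-learning models rather than applying a fixed threshold.

```python
# Illustrative keystroke-dynamics sketch: the event format and tolerance are
# hypothetical, not a production behavioural-biometrics model.
from statistics import fmean

# Each event: (key, press_time, release_time) in seconds
Events = list[tuple[str, float, float]]


def features(events: Events) -> tuple[float, float]:
    """Two classic keystroke features: mean dwell (hold) time and mean
    flight time (gap between releasing one key and pressing the next)."""
    dwell = fmean(release - press for _, press, release in events)
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, fmean(flights)


def matches_profile(sample: Events, profile: tuple[float, float],
                    tol: float = 0.05) -> bool:
    """Crude check: is this typing rhythm close to the enrolled profile?
    Real systems use a trained ML classifier instead of a fixed tolerance."""
    d, f = features(sample)
    return abs(d - profile[0]) < tol and abs(f - profile[1]) < tol


# Example: enrolled profile (dwell ~0.09s, flight ~0.12s) vs a scripted bot
bot = [("a", 0.00, 0.01), ("b", 0.01, 0.02), ("c", 0.02, 0.03)]
print(matches_profile(bot, (0.09, 0.12)))   # -> False
```

The scripted bot fails because its timings are inhumanly fast and uniform, exactly the kind of regularity Fanaroff describes machine learning picking out of genuine human activity.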
