$US25 million fraud shows the risks of new technology as companies warned to prepare

Where once a phishing email might appear obvious, riddled with grammatical and spelling errors, AI has allowed hackers who don’t even speak their target’s language to send professional-sounding messages.

In a sequence seemingly out of a science fiction movie, last month Hong Kong police described how a bank employee in the city paid out $US25 million ($37.7 million) in an elaborate deepfake AI scam.

The worker, whose name and employer police refused to identify, was concerned by an email requesting a money transfer that was purportedly sent by the company’s UK-based chief financial officer, so he asked for a video conference call to verify. But even that step was insufficient, police said, because the hackers created deepfake AI versions of the man’s colleagues to fool him on the call.

“[In the] multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching said in remarks reported by broadcasters RTHK and CNN.

How they were able to create AI versions of executives at the unnamed company to a believable standard has not been revealed.

But it isn’t the only alarming case. In one documented by The New Yorker, an American woman received a late-night phone call that appeared to come from her mother-in-law, wailing “I can’t do it”.

A man then came on the line, threatening her life and demanding money. The ransom was paid; later calls to the mother-in-law revealed she was safe in bed. The scammer had used an AI clone of her voice.

Scammers have used AI-generated “deepfake” images of Matt Comyn, Commonwealth Bank CEO. 

50 million hacking attempts

But scams — whether on individuals or companies — are different to the kind of hacks that have befallen companies including Medibank and DP World.

One reason purely AI-driven attacks remain largely undocumented is that hacks involve so many different components. Companies use different IT products, and the same products typically come in a great many versions that work together in different ways. Even once hackers are inside an organisation or have duped an employee, funds have to be moved or converted into other currencies. All of that takes human work.

Even though AI-enabled deepfakes remain, for now, a risk on the horizon, big companies have used more pedestrian AI-based tools in cybersecurity defence for years. “We’ve been doing this for quite some time,” says National Australia Bank chief security officer Sandro Bucchianeri.

NAB, for example, has said it is probed 50 million times a month by hackers looking for vulnerabilities. Those “attacks” are automated and relatively trivial. But if a hacker were to find a flaw in the bank’s defences, it would be serious.

Microsoft’s research has found it takes an average of 72 minutes for a hacker to go from gaining entry to a target’s computers through a malicious link to accessing corporate data. From there, it is a short step to the consequences of major cyberattacks such as those on Optus and Medibank in the past year: personal information leaked online, or systems as crucial as ports stalled.

That requires banks such as NAB to rapidly get on top of potential breaches. AI tools, says Bucchianeri, help its staff do that. “If you think of a threat analyst or your cyber responder, you’re looking through hundreds of lines of logs every single day and you need to find that anomaly,” Bucchianeri says. “[AI] assists in our threat hunting capabilities that we have to find that proverbial needle in the haystack much faster.”
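In outline, the technique Bucchianeri describes is anomaly detection: summarise each log entry as a handful of numbers, then let a model surface the entries that look statistically unusual. The sketch below is illustrative only; the features and the use of scikit-learn’s IsolationForest are assumptions for the example, not NAB’s actual tooling.

```python
# Illustrative sketch of AI-assisted threat hunting over logs.
# The features (hour, bytes_out, failed_logins) are hypothetical;
# real systems draw on far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

def featurise(entry: dict) -> list:
    """Reduce one parsed log entry to numeric features."""
    return [entry["hour"], entry["bytes_out"], entry["failed_logins"]]

def find_anomalies(log_entries, contamination=0.01):
    """Return the entries an isolation forest scores as outliers."""
    X = np.array([featurise(e) for e in log_entries])
    model = IsolationForest(contamination=contamination, random_state=0)
    flags = model.fit_predict(X)  # -1 marks an outlier
    return [e for e, flag in zip(log_entries, flags) if flag == -1]
```

An analyst then reviews a handful of flagged entries instead of scanning hundreds of lines by hand, the proverbial needle found faster.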

Mark Anderson, national security officer at Microsoft Australia, agrees that AI should be used as a shield if malicious groups are using it as a sword.

“In the past year, we’ve witnessed a huge number of technological advancements, yet this progress has been met with an equally aggressive surge in cyber threats.

“On the attackers’ side, we’re seeing AI-powered fraud attempts like voice-synthesis and deepfakes, as well as state-affiliated adversaries using AI to augment their cyber operations.”

He says it is clear that AI is a tool that is equally powerful for both attackers and defenders. “We must ensure that as defenders, we exploit its full potential in the asymmetric battle that is cybersecurity.”

Beyond the AI tools, NAB’s Bucchianeri says staff should watch out for demands that don’t make sense. Banks never ask for customers’ passwords, for example. “Urgency in an email is always a red flag,” he says.

Thomas Seibold, a security executive at IT infrastructure services company Kyndryl, says similarly basic, practical tips apply for staff tackling emerging AI threats, alongside more technological solutions.

“Have your critical faculties switched on and do not take everything at face value,” Seibold says. “Do not be afraid to verify the authenticity via a company-approved messaging platform.”

Mileva Security Labs founder Harriet Farlow remains optimistic about AI despite the risks.  

Even if humans start recognising the signs of AI-driven hacks, the systems themselves can be vulnerable. Farlow says the field known as “adversarial machine learning”, which targets AI models themselves, is growing.

Though it has been overshadowed by ethical concerns about whether AI systems might be biased or take human jobs, the security risks become evident as AI is deployed in more settings, such as self-driving cars.

“You could create a stop sign that’s specifically crafted so that the [autonomous] vehicle doesn’t recognise it and drives straight through,” says Farlow.
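Attacks of this kind typically work by adding a small, carefully computed perturbation to an input. As a minimal sketch of the digital version of the idea, the snippet below uses the Fast Gradient Sign Method; the classifier, image tensor and class label are hypothetical stand-ins, not any real vehicle system.

```python
# Minimal FGSM sketch: perturb an image so a classifier misreads it.
# `model`, the image tensor and STOP_SIGN_CLASS are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return `image` nudged to maximise the model's loss on `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly along the sign of the loss gradient;
    # the change can be imperceptible to humans yet flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage:
# perturbed = fgsm_perturb(classifier, stop_sign_image,
#                          torch.tensor([STOP_SIGN_CLASS]))
```

The physical-world equivalent Farlow describes swaps the pixel perturbation for a crafted sticker or paint pattern on the sign itself.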

But despite the risks, Farlow remains an optimist. “I think it’s great,” she says. “I personally use ChatGPT all the time.” The risks, she says, can remain unrealised if companies deploy AI right.
