Hackers leveraging AI: vulnerability for law firms


By Don Nokes

Hackers are upping their game, going beyond email tactics. Attacks are no longer confined to email account takeover, ransomware threats, or stealthier schemes that gain access via trusted third parties such as title companies and payroll vendors.

The bad actors are now using artificial intelligence to clone your actual voice and deploy it in voice messages. Even a few seconds of video content can give scammers what they need to recreate someone's voice.

This is one kind of social engineering hack called spoofing. Social engineering fools the target through psychological manipulation that induces human error, rather than exploiting technical or digital system vulnerabilities. Spoofing is when hackers pretend to be someone else to perpetrate a scam.

Sadly, we’ve all heard about fraudulent requests in the news in which scammers use AI to clone kids’ voices — often from a snippet on social media — and persuade parents to pay out large sums of money for a health care emergency or kidnapping situation. These nefarious strategies, however, go beyond one’s personal life to businesses, and more specifically, to law firms.

Communicating with a trusted source

One of the key ingredients of a successful social engineering hack is establishing confidence that the request for private information or a directive to send funds is coming from a trusted, appropriate source.

Imagine you are a legal assistant, paralegal, bookkeeper or other law firm employee authorized to deal with money. You look down at your phone and you see a text coming in from your boss. You open the text message and can see the entire thread you’ve had with your boss in the past. It’s easy to believe that you are communicating with your boss.

In these situations, when a request comes in from an email address that you're certain of, or a text message arrives and the caller ID indicates a trusted source, victims tend to let their guard down and respond to the malicious request. Tools that alter the phone number a call or text appears to come from are readily available today.

Here’s an example of how the scam works using AI: Once the bad actors learn (possibly from first hacking a firm’s email) that a financial transaction is taking place, they send an AI-generated voice message to confirm where to send the funds. The person transferring the funds hears the familiar voice confirming the transfer and sends the money.

This threat applies to all law firms, but we see particular vulnerabilities for those handling corporate or residential real estate transactions involving lots of parties, including title companies, mortgage brokers, clients, realtors and more.

According to cybersecurity company Aura, in 2021 alone, more than $756 million was stolen in wire transfer scams in which funds were intercepted and diverted elsewhere. Law firms are well advised to be particularly vigilant when using wire transfers, especially with the uptick in AI-generated fraud.

From a cybersecurity viewpoint, we know that, sometimes, the title companies act as a clearinghouse to facilitate the transactions, and they usually have rigorous safeguards against fraudulent email. Given the current prevalence of AI-generated breaches, we expect businesses of all kinds will now expand their cybersecurity safety procedures to deal with the possibility of imposter voice messages generated by AI.

Exactly how AI emulates your voice

Hackers are now using AI and its ability to impersonate your voice to perpetrate more cyberattacks; they continue to leverage technology to create more sophisticated hacks. With AI as a tool, they are attempting fewer, but more targeted, strikes.

For example, instead of looking at your phone and being convinced that it is your boss texting you, imagine answering the phone and hearing his or her actual voice requesting some personal information, your business ID accounts, or a payment.

These AI-assisted incidents are becoming more common, and they are very effective. In fact, a mere seven- or eight-second sample of your voice is enough to train AI to emulate it. Even if the sample doesn’t include any tension or anger, the system can generate a message that conveys emotion.

Another tactic we have seen perpetrators use is sending weekly messages to a target. When they receive an “out of office” reply, they leverage that knowledge to send a text or email, or go a step further and send an AI-generated voice message, requesting sensitive information or perhaps a financial transaction. They will open their message with something like, “As you know, I’m out of the office.” This approach adds another level of credibility, often resulting in the legal assistant or office manager letting their guard down and allowing the hacker to succeed.

While it’s easy to think your voice is not out there, it likely is. Lawyers and professional staff often take part in webinars, news interviews, short videos on LinkedIn or their law firm website for branding and marketing, or even — in one’s personal life — narrating vacation videos on Facebook.

Institute a code word and be paranoid

When my four kids were little, we had a family code word. We told our kids that we would never send anyone to pick them up from school or other activities without saying the code word. They were not to go with anyone unless they heard the word.

It may be time to institute a code word in your law firm. It’s a simple way to verify that a request for specific private information or funds is coming from the expected source, even if it sounds like them or their caller ID checks out.

The bad guys keep coming up with ways to hack us seemingly faster than the good guys can keep up. It’s vital to commit to both regular, mandated, firm-wide training to stay abreast of the latest hacking techniques and ongoing dialogue to keep the issue top of mind.

Basically, we tell our clients they need to continue to be a bit paranoid and look for absolute verification if anything triggers even a hint of doubt.


Don Nokes is president of NetCenergy, an outsourced IT service provider that works with law firms on better user productivity, smarter IT, and more secure systems.
