How criminals are hacking your children's social media using just three seconds of their voice and turning it into a terrifying AI scam to trick parents

Criminals are hacking children’s social media accounts and cloning their voices using AI to trick their parents into sending them money, MailOnline can reveal.  

Even the most basic scammers are using simple AI tools online to turn just three seconds of a child’s voice into deepfake audio files with an 85 to 90 per cent match, security experts have warned.

With more effort spent on cloning, hackers can even achieve a 95 per cent voice match, research from security firm McAfee shows, leaving parents at risk of being exploited by ruthless fraudsters.

The more accurate the clone, the better chance a criminal has of preying on the vulnerability of a friend or family member and tricking them into handing over their money.

One influencer, with nearly 400,000 followers on TikTok, revealed how her mother was woken up in the middle of the night to a call from an unknown number. When she answered, it was her daughter’s voice – which had been cloned – screaming for help. 

It comes as fraud experts issued fresh warnings over the latest ‘hi mum’ texts where scammers ‘prey on our goodwill with emotive stories’ and con parents into thinking their children are in trouble.

Vonny Gamot, head of Europe, Middle East and Africa at McAfee, told MailOnline that AI is ‘becoming a catalyst’ for online fraud such as the ‘hi mum, hi dad’ scam.

She said: ‘The AI wave is going to give a second life to everything that has been around for a while already. The only thing is, we’re going to see more and those threats are going to be even more lethal because of AI.’

Ms Gamot revealed how a rise in the younger generation using social media – as well as preferring voice notes over text messages – could be behind the rise in this type of fraud.

Hackers are targeting children's social media accounts on platforms such as Facebook, Instagram, Snapchat and TikTok to obtain samples of their voices from videos posted online or from voice messages.

Ms Gamot explained: ‘Hackers are going to sample their voice from social media. They’re going to use no more than three seconds and they are going to create a new scam.’

She added: ‘You know this voice – and the reason why you know this voice is because this voice has been sampled. The sample is genuine but the AI has used it to create a full scam.’ 

The hackers will then call, send voicemails or voice messages with the sample to the ‘mum’ and ‘dad’ contacts saved in the children’s phones. 

Ms Gamot explained that when a mother or father then hears the voice of their child, they are likely to panic in the moment. ‘You’re losing your vigilance, you’re not calling on your common sense,’ she added. 

One woman shared a video on TikTok in which she claimed her family had fallen victim to the terrifying AI voice scam.

‘These people are manipulating your voice, taking your voice from phone calls you’ve made, this video, and they create what they want your voice to say,’ she said.

‘At 2.26 in the morning my mum got a call from an unknown number and of course she answered it because she’s like something’s wrong, it’s 2.30am. When she answers the phone, it’s me, my voice, screaming hysterically and crying.’

The woman explained how when her mother asked what was wrong, the voice kept shouting: ‘Please don’t hurt me, please. Why are you doing this?’

She continued: ‘All of a sudden the call ended out of nowhere. She’s trying to call the number back and getting no response.’

She even revealed how the voice was using her dog’s name during the call, making the scam even more realistic.


It’s not just children who are being targeted. A McAfee survey found 50 per cent of adults share their voice data online at least once a week on social media or through voice notes. 

The research revealed scammers are using AI technology to clone voices and then send a fake voicemail to or call the victim’s contacts pretending to be in distress. 

How to protect yourself from AI voice cloning 

  • Create a ‘codeword’ with children, family members or trusted close friends that only they could know – and make a plan to always ask for it if they call, text or email. 
  • Always question the source: If it’s a call, text or email from an unknown sender, or even if it’s from a number you recognise, stop, pause and think. Does that really sound like them? Hang up and call the person directly. 
  • Think before you click and share – Who is in your social media network? Do you really know and trust them? Be thoughtful about the friends and connections you have online. The wider your connections and the more you share, the more risk you may be opening yourself up to having your identity cloned for malicious purposes. 
  • Identity theft protection services can help make sure your personally identifiable information is not accessible or notify you if your private information makes its way to the Dark Web. Take control of your personal data to avoid a cybercriminal being able to pose as you. 
  • Source: McAfee 

Some 65 per cent of adults were not confident that they could distinguish the cloned version from the real thing.

The cost of falling for an AI voice scam can be significant, with 40 per cent of people who’d lost money saying it had cost them over £1,000, while 6 per cent were duped out of between £5,000 and £15,000. 

Ms Gamot said that her company is already identifying more than 1.5 million AI-based threats per day – with the ‘hi mum, hi dad’ scam text being one major form of fraud.

‘Scammers will use AI tools that they can simply find online or on the dark web. They will sample something,’ she added.

‘In that case it could be a voice. They then use it against you, and they’re going to try to either get data out of you so that they can resell it, or they’re going to try to get money out of you. In both cases, you’re not in good shape.’

Ms Gamot said that the ‘digitisation of the younger generation is becoming increasingly important’ – with children getting access to devices at a younger age and schools asking kids to do homework online.

McAfee advise victims of potential AI scams to first ‘take a breath’ and then call their friend or family member to double or even triple check that they are not in trouble.

Ms Gamot explained: ‘We are really starting to see that those threats are picking up, and we want to make sure that we give everyone the tools to actually protect themselves while they continue to enjoy their digital life online.

‘The only thing is we want to make sure that people can do this in a safe way.

‘Before we try to sell our solutions, we also want to share a few tips that don’t cost anything and are actually pretty efficient. It really starts with taking a big breath and thinking twice.’


McAfee also suggests setting a verbal ‘codeword’ with children or trusted close friends that only they would know, to be used when in trouble.

Scam expert Nick France, of leading cybersecurity company Sectigo, also fears that these types of fraud will only increase in the next five years, especially with the growing use of AI deepfakes.

Mr France told MailOnline: ‘The ever-increasing sophistication of AI has made it possible for cybercriminals to successfully mimic the voice of another person, and successfully impersonate them for a number of attacks. 

‘People think phone scams that successfully manipulate someone’s voice are mission impossible, but the reality is that AI deepfake voice technology is more democratised than we like to believe. It doesn’t take an MIT graduate to pull this off.’

Just last month, Sadiq Khan was the subject of deepfake audio that mimicked him saying: ‘I don’t give a flying s*** about the Remembrance weekend.’


The rise of AI voice cloning has brought a resurgence of the ‘hi mum, hi dad’ scam, which first started appearing on WhatsApp in 2021.

It has since spread to other channels such as text messages and now voice messages.

Data shared with MailOnline by TSB shows that ‘Friends and Family fraud’ accounts for 53 per cent of all impersonation scams, with 93 per cent originating on WhatsApp.

The average loss for the fraud victim is more than £1,600.

Matt Hepburn, fraud spokesman for TSB, said: ‘Friends and family impersonation scammers prey on our goodwill with emotive stories and pleas for urgent financial help, simply to steal money intended for someone close to us.

‘If you receive one of these texts, contact the individual directly before engaging any further, and certainly before ever making a payment – as it’s highly likely to be a scam.’

Chris Ainsley, Head of Fraud at Santander, told MailOnline: ‘It’s natural to want to immediately act on any messages from loved ones asking for help, but scammers will impersonate friends and family members to panic you into transferring money without thinking about what you’re doing.

‘Always take the time to consider what you’re being asked to do. Any requests to transfer money to a new account, or excuses why a person can’t speak directly on the phone, should be treated as a red flag.’