
For years, one of the first lines of defense against phishing emails has been broken English.
To guard against messages that try to trick recipients into clicking malicious links or revealing credentials, corporate training programs have urged employees to be on the lookout for spelling mistakes, odd grammar and other errors common to those for whom English isn’t a first language.
Now generative AI tools, including OpenAI’s popular ChatGPT, can fix all those red flags.
In the hands of even amateur hackers, AI has become a potent threat due to its ability to analyze vast amounts of publicly available data about a target and create remarkably personalized emails in just seconds.
“Suddenly, that text is going to look like it’s coming from your granddaughter or another child. They’ll know who your best friend is, and it will come that way,” said Kathryn Garcia, director of operations for New York state, who led the development of its first cybersecurity strategy.
The Problem
So-called large language models like ChatGPT and Google’s Bard don’t understand language as humans do, but they can dissect how sentence structure, colloquialisms and slang work, predicting how to construct written speech, sometimes with uncanny precision.
Email security company Abnormal Security said it has seen phishing emails from generative AI platforms used against its customers. The messages are perfectly crafted and look legitimate, making them difficult to detect at first glance, said Abnormal Chief Executive Evan Reiser.
“You look at these emails and there’s no trigger in your mind warning you that this could be a phishing attack,” he said.
LLM tools can scrape the web for information about a person from social media, news sites, internet forums and other sources to tailor tempting emails, the kind of legwork nation-state hackers often spend months on. If attackers already have access to proprietary information, they can salt in more convincing details, even mimicking writing styles.
“Now, a criminal can just take those emails, dump them automatically into an LLM and tell it to write an email referencing my last five [online] conversations. What used to take eight hours can be there in eight seconds,” Reiser said.
ChatGPT and Bard have built-in protections against creating malicious content such as phishing emails. But many open-source LLMs have no safeguards, and hackers are licensing models that can write malware to willing buyers on darknet forums.
The Illusion of Safety
How generative AI can create eerily compelling phishing emails
In this example, Abnormal Security used an LLM created by one of its engineers to demonstrate how easily a convincing email can be generated for a specific person. The algorithm scraped my public-facing social media presence to generate an email tailored to me, all in a matter of seconds.
Some of the tell-tale signs of a phishing attempt are there, such as a sense of urgency, but it references my work background covering cybersecurity and financial markets.
Subject: Quick favor, mate?
Hey James,
Hope you’re doing well! I’m in a bit of a fix and could really use your help.
I’m working on a piece about cybersecurity threats from the perspective of the victims, particularly in the financial sector. I remembered you’ve got a ton of experience in this realm from your days at Dow Jones & Co, so thought you’d be the perfect person to ask.
I’ve attached a document with specific areas I’m struggling with. Could you take a look and give me your insights when you have a moment?
I know this is a big ask, but I’m really in a bind.
Here’s the link to the document: [insert malicious link].
If you can’t, no worries at all, I totally understand. Hoping your band’s still on for that UK gig, can’t wait to see you guys perform!
Thanks a ton, mate.
Best, Evan
The approach is still a little clumsy, in that most people would refer to my role at The Wall Street Journal rather than parent company Dow Jones, but the slip isn’t enough to raise eyebrows.
It cleverly reframes the ask, resting the urgency on helping out a contact rather than on aggressively trying to frighten me.
Eerily, the algorithm has also picked up that my band is on tour in the U.K. in October, presumably from a LinkedIn post, and refers to that, as a personal contact might.
The tone of the email is conversational, and it uses British slang such as ‘mate’ (I am English) to convey familiarity from a contact, in this case Evan Reiser, Abnormal’s chief executive.
The Future
AI has long been used to manipulate images to make convincing deepfakes. Simulated speech that mimics a person’s voice is developing rapidly. Hybrid attacks involving email, voice and video are an approaching reality.
But the attacks we can’t predict are the real threat, cybersecurity and national-security experts contend.
“AI will make the techniques used today more scalable, faster and more effective, but also AI might be able to think about attacks that we can’t even conceive today,” said Eric Goldstein, executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency, part of the Department of Homeland Security.
AI programs have already proved, for instance, that they can outfox humans at games such as chess and Go by coming up with strategies people would be unlikely to devise, Goldstein said. Applying the same template to cybercrime could result in online attacks that current systems aren’t designed to watch for, or social-engineering attacks so lifelike they are impossible to detect.
But some cybersecurity companies are beginning to incorporate generative AI into their own products, to try to get ahead of its widespread misuse. Email security provider Ironscales, for instance, uses a proprietary LLM to generate phishing emails for security awareness training.
Defensive AI systems will be needed to fight off AI-powered attacks, said Eyal Benishti, Ironscales chief executive. Another looming challenge: AI’s ability to produce convincing attacks at scale.
“Just imagine business email compromise and targeted phishing at the same volume as we experience spam, because that’s what will happen,” he said.
In a generative AI world, corporate security must change, said Diego Souza, chief information security officer of manufacturer Cummins.
Companies will need to improve employee training and awareness on phishing, Souza said. Networks will have to be carefully segmented to prevent hackers from doing extensive damage if they break in, he said.
Chris Inglis, an adviser at corporate risk consulting firm Hakluyt, said cyber professionals are reeling from the speed at which generative AI has arrived. But the risks aren’t limitless, said Inglis, who until February was U.S. national cyber director.
For LLMs to continue to learn, they must ingest vast amounts of data, and the larger LLM platforms have begun to exhaust publicly available data sets, he said. That puts a natural cap on what widely available models can be trained on, meaning the current pace of development might slow.
“The interesting thing about ChatGPT isn’t what it is at the moment, but the speed at which it has come at us,” he said.
Write to James Rundle at james.rundle@wsj.com