The Weaponization of AI Demands More Robust Cybersecurity Training


AI has rapidly advanced in recent months. Large language models (LLMs) such as OpenAI’s GPT-4 have made drastic progress, responding fluently and creatively to a vast range of prompts. LLMs have also demonstrated remarkable cognitive capabilities, scoring extremely well on standardized assessments and improving at a rapid pace. Companies are adopting AI on a large scale and making unprecedented investments in the technology.

However, there’s a dark side to this AI revolution, particularly when it comes to cybersecurity. LLMs can now produce a virtually limitless amount of compelling text on demand: text that can drastically increase the effectiveness of phishing attacks, which rely on coercing, enticing and otherwise manipulating human beings. Employees will find it harder to distinguish fraudulent messages from legitimate ones, which has the potential to cause a spike in breaches and other destructive cyberattacks.

Despite all the headlines about how AI will make human workers redundant, well-trained employees are only becoming more essential for cybersecurity. As cybercriminals increasingly use AI in social engineering attacks, cybersecurity awareness training (CSAT) has never been more critical for keeping companies safe.

The Evolution of AI-Powered Cyberattacks

Two years ago, when ChatGPT wasn’t yet a global sensation, cybersecurity researchers made a startling discovery about its predecessor, GPT-3. The researchers found that phishing emails generated by GPT-3 were significantly more likely to persuade victims to take an action than messages created by humans. While this was a small study, it revealed that LLMs could produce convincing and personalized phishing emails at scale, a threat that has only become more urgent amid the series of AI breakthroughs since then.

Cybercriminals are already trying to take advantage of ChatGPT, from Russian hackers attempting to bypass OpenAI’s API restrictions to dark web discussions about using LLMs to develop malware and launch social engineering attacks. The cybersecurity implications of AI are sweeping: cybercriminals will use the technology to write malicious code, spread disinformation and attack victims in countless other ways. Although it’s tempting to frame this problem as an arms race between malicious and beneficial AI applications, human intelligence remains vital to thwarting AI-powered cyberattacks.

As these attacks become more frequent and sophisticated, cybersecurity awareness training (CSAT) will play an even larger role in protecting employees and companies. But CSAT will also have to adapt as AI drives fundamental changes in cybercriminal tactics.

How CSAT Can Keep Pace With Developments in AI

According to the 2023 Verizon Data Breach Investigations Report, almost three-quarters of breaches involve the human element. This finding has been consistent over the years, as cybercriminals have long relied on exploiting all-too-human traits like fear, greed and curiosity to spread malware and gain access to secure systems. AI has the potential to make social engineering more dangerous than ever by enabling cybercriminals to send more effective phishing emails and deceive victims in many other ways.

AI-powered cyberthreats have made it necessary to rethink longstanding approaches to cybersecurity awareness. For example, employees have traditionally been taught to identify phishing attempts by flagging errors such as misspellings, broken syntax and grammatical mistakes in digital communications. Cybercriminals can now use LLMs to produce clean text that won’t contain these errors, as well as phishing messages customized in ways that make employees more likely to click on malicious links or download malware. The generative capacity of LLMs will also allow cybercriminals to experiment with many strategies to see what works best.

None of this is to say employees should stop looking for mistakes in emails or other signs of phishing; it’s still crucial to be aware of red flags like these. However, companies need to broaden the scope of their CSAT efforts to encompass the forms of social engineering that will be prevalent in the AI era, as the sketch below illustrates.
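To make that broader scope concrete, here is a minimal, hypothetical sketch of checks that go beyond spelling and grammar: urgency cues, and links whose visible text names a different domain than the one they actually point to. The phrase list, regex and example values are illustrative assumptions, not a production phishing filter.

```python
import re
from urllib.parse import urlparse

# Illustrative urgency cues; a real list would be curated and maintained.
URGENCY_PHRASES = ("act now", "urgent", "verify your account", "password expires")

def link_domain_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but that point to another."""
    shown = re.search(r"([a-z0-9-]+\.[a-z]{2,})", display_text.lower())
    actual = (urlparse(href).hostname or "").lower()
    return bool(shown) and not actual.endswith(shown.group(1))

def red_flags(subject: str, body: str, links: list[tuple[str, str]]) -> list[str]:
    """Return human-readable warnings; links are (visible text, href) pairs."""
    text = f"{subject} {body}".lower()
    flags = [f"urgency cue: {p!r}" for p in URGENCY_PHRASES if p in text]
    flags += [f"link mismatch: {t} -> {h}" for t, h in links
              if link_domain_mismatch(t, h)]
    return flags

if __name__ == "__main__":
    # Hypothetical example message with a look-alike destination domain.
    print(red_flags(
        "Action required: verify your account",
        "Your password expires today. Click the link below to keep access.",
        [("paypal.com/security", "https://paypa1-login.example.net/verify")],
    ))
```

In training terms, these are exactly the cues employees should learn to check by hand: does the link’s real destination match what the text claims, and is the message manufacturing urgency?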

A New Form of Cybersecurity Awareness

Employees are the first line of defense against many of the most destructive cyberattacks companies face, and the weaponization of AI has made cybersecurity awareness even more important. Although many companies have already discovered that check-the-box CSAT (such as rudimentary training that takes place once or twice per year) is far from sufficient, their approach to cybersecurity training must now adapt to a cyberthreat landscape that has been permanently altered by AI.

For example, cybercriminals can use LLMs to personalize their phishing campaigns, a process that would be far too laborious and inefficient without a way to produce large quantities of high-quality text at the push of a button. They will be able to tailor phishing messages to a victim’s industry, role within the organization, past responses and a wide range of other variables. This is why CSAT needs to be personalized as well: training should be built around employees’ unique skills, learning styles and behavioral patterns, including psychological traits that make them vulnerable to attack, such as curiosity and obedience. The sketch below shows one way such personalization might be structured.
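As a rough illustration of personalized CSAT, the following sketch assigns training modules based on an employee’s role and past phishing-simulation results. Every field name, module name and threshold here is a hypothetical placeholder, not a reference to any particular training product.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str          # e.g. "finance", "hr", "engineering"
    sim_failures: int  # clicks on simulated phishing in the past year

# Hypothetical role-to-module mapping; real curricula would come from the
# organization's own risk assessment.
ROLE_MODULES = {
    "finance": ["invoice-fraud", "wire-transfer-verification"],
    "hr": ["resume-attachment-safety", "identity-verification"],
    "engineering": ["credential-hygiene", "oauth-consent-phishing"],
}

def assign_modules(emp: Employee) -> list[str]:
    """Tailor training to the employee's role and demonstrated weak spots."""
    modules = ["llm-generated-phishing-basics"]  # baseline for everyone
    modules += ROLE_MODULES.get(emp.role, [])
    if emp.sim_failures >= 2:                    # assumed threshold for extra drills
        modules.append("hands-on-spearphishing-drill")
    return modules

print(assign_modules(Employee("Dana", "finance", sim_failures=3)))
```

The same idea extends to cadence and format: employees who repeatedly fail simulations might receive shorter, more frequent sessions rather than the same annual module as everyone else.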

A central tenet of effective CSAT is flexibility. The cyberthreat landscape is constantly shifting, and employees need to be informed about the latest cybercriminal tools and tactics. A barrage of AI-powered cyberattacks is imminent, and the companies in the best position to fend off these attacks are the ones that are preparing their employees right now.
