(844) 627-8267 | Info@NationalCyberSecurity

HC3 warns about AI’s cybersecurity impacts

The U.S. Department of Health and Human Services’ Health Sector Cybersecurity Coordination Center (HC3) published a briefing this month focused on artificial intelligence, cybersecurity and the healthcare industry. 

The goal of the alert – which shows some of the ways generative AI and large language models are helping cyber bad actors hone their craft – is to give hospitals, health systems and other healthcare organizations some tools and insights to stay secure despite an increase in AI-enhanced cybersecurity threats. 


ChatGPT, OpenAI’s generative pre-trained transformer model, and other large language models like it pose a number of cybersecurity risks, including phishing attacks, rapid exploitation of vulnerabilities, automated attacks, more evasive ransomware and more complex malware, according to HC3’s latest threat briefing, released July 13.

The agency said that one phishing email template it created with ChatGPT “appears to be delivering good news” and includes correct grammar and sentence structure in order to “entice the recipient to open the attachment with positive news.”

After producing the email, the attacker would need to attach a malicious file, and then fill in the blanks “to make it even more believable.”

In other examples, the bad actor would need to insert a malicious link, add customization and send it out.

The agency also shares a portion of BlackMamba, proof-of-concept research from the security company HYAS, which shows how a user can ask ChatGPT to create a Python 3 program that captures and exports keystrokes. 

The briefing also highlights an example of malware code that leverages Microsoft Teams for data exfiltration and developer tools to infiltrate networks.

Developers who unknowingly incorporate malicious packages into the software they are building give attackers access to any system running that software, said HC3.
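A standard mitigation for this supply-chain risk is hash pinning: each dependency’s digest is fixed ahead of time, so a tampered or typosquatted package fails verification before it is ever installed. A minimal sketch of the check (the helper name is illustrative, not from the briefing):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package's SHA-256 digest to a pinned value.

    If the digest does not match, the artifact was altered somewhere
    between the publisher and this machine and should not be installed.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

Package managers can apply the same check automatically; pip, for example, enforces it for every entry in a requirements file when run with `--require-hashes`.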

“Ideally, risk management efforts start with the plan and design function in the application context and are performed throughout the AI system lifecycle,” the briefing notes.

The agency recommends penetration testing, automated threat detection, continuous monitoring, cyber threat analysis and incident handling and AI training for cybersecurity personnel.

“AI-educated users and AI-enhanced systems can better detect AI-enhanced phishing attempts,” HC3 noted.

While securing AI is a “cat-and-mouse” game, ChatGPT and other LLMs are also helping with cyber education and prevention. 

AI and large language models can also be used to enhance cybersecurity measures, such as scanning emails to prevent cyberattacks and automating tasks for security teams. 
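As an illustration of the email-scanning idea, here is a minimal heuristic risk scorer. The keyword lists are hypothetical stand-ins for the trained models and threat-intelligence feeds that production tools actually use:

```python
import re

# Hypothetical indicator lists for illustration only; real defenses rely
# on trained classifiers and curated threat feeds, not fixed keywords.
URGENCY_PHRASES = ["act now", "urgent", "verify your account", "password expired"]
SUSPICIOUS_TLDS = (".xyz", ".top", ".zip")

def phishing_score(subject: str, body: str) -> float:
    """Return a 0.0-1.0 heuristic phishing risk score for an email."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Urgency language is a classic phishing lure.
    hits = sum(phrase in text for phrase in URGENCY_PHRASES)
    score += 0.3 * hits / len(URGENCY_PHRASES)
    # Flag links to domains with throwaway top-level domains.
    for url in re.findall(r"https?://[^\s\"'>]+", text):
        domain = url.split("/")[2]
        if domain.endswith(SUSPICIOUS_TLDS):
            score += 0.4
            break
    # "Good news" framing plus an attachment matches the lure HC3 describes.
    if "attachment" in text and ("congratulations" in text or "good news" in text):
        score += 0.3
    return min(score, 1.0)
```

A pipeline like this would quarantine or flag messages above a threshold for human review rather than deleting them outright, since heuristics of this kind produce false positives.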

Bespoke AI models “essentially arm every organization with their very own frontline AI defender,” wrote Dhananjay Sampath, co-founder and CEO of Armorblox, a cybersecurity defense company now part of Cisco, in April.

It’s a “home field advantage” because the LLMs learn to understand “what is and is not normal behavior for their organization and fine tune themselves accordingly,” he said in a post on the company’s website.


Earlier this year, MITRE and Microsoft released a taxonomy to help defend against cyberattacks on machine learning systems. 

In the briefing, HC3 points to the ATLAS framework and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework as tools to help healthcare organizations manage the heightened risks of AI-enabled cybersecurity exploits.

“As the world looks to AI to positively change how organizations operate, it’s critical that steps are taken to help ensure the security of those AI and machine learning models that will empower the workforce to do more with less of a strain on time, budget and resources,” said Ram Shankar Siva Kumar, principal program manager for AI security at Microsoft, in a statement announcing the release. 


“The dark web contains many examples of discussions of the use of ChatGPT and other AI technologies to create malware and launch cyberattacks,” HC3 said in the briefing. “As AI capabilities enhance offensive efforts, they’ll do the same for defense; staying on top of the latest capabilities will be crucial.” 

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.

