

Navy pushing for AI to bolster authentication in a zero trust environment

AI agents can build behavioral profiles that more accurately authenticate not just a person or a device, but a person bound to a specific device.

It’s perhaps no surprise that two of the biggest buzzwords in cybersecurity — zero trust and artificial intelligence — are coming together. David Voelker, zero trust lead at the Department of the Navy, said he’s been pushing for an agentic threat detection framework as the next stage of the Navy’s zero trust transformation. He said AI has the potential to enhance user and entity behavioral analytics for better authentication.

“The MITRE [Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK)] framework will provide recommended detections, recommended mitigations,” Voelker said on Federal Monthly Insights — Securing mobile collaboration. “And those detections and mitigations can be reduced into an artificial neural network to provide a probability of whether or not we’ve detected some adversarial threats specific to the technologies that they’re implementing in their environment. Being able to optimize that data point on a probability that we can report back to a [security operations center (SOC)] member to give them a definitive yes or no, based on a probability that we have something that we need to pay attention to.”
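Voelker didn’t describe a specific implementation, but a minimal sketch of the idea, reducing hypothetical ATT&CK-style detection counts to a single probability with a small neural network (here scikit-learn’s MLPClassifier, with invented features, labels and an illustrative alerting threshold), might look like this:

```python
# Illustrative only: score hypothetical ATT&CK-style detection signals with a
# small neural network and report one probability to the SOC analyst.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row is one observation window; the columns are invented detection
# counts (failed logons, new service installs, anomalous DNS queries).
X_train = np.array([
    [0, 0, 1], [1, 0, 0], [0, 1, 0],    # windows labeled benign
    [6, 3, 9], [8, 2, 7], [5, 4, 11],   # windows labeled adversarial
])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Score a new window and hand the SOC a single probability.
window = np.array([[7, 1, 8]])
p_threat = model.predict_proba(window)[0, 1]
if p_threat > 0.8:                       # illustrative alerting threshold
    print(f"SOC alert: probable adversarial activity (p={p_threat:.2f})")
else:
    print(f"No alert (p={p_threat:.2f})")
```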

That’s part of a broader effort to bind the authentication of the user to the authentication of the device. Voelker said it’s difficult to determine whether the person who was issued the authentication token bound to a device is actually the person now moving through the cyber environment. That binding is especially important when a user swaps between devices, such as starting out on a mobile device in the field and then moving to a laptop or desktop in the office.
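As a hypothetical illustration of that binding (not the Navy’s actual mechanism), a server could derive a session token from both the user identity and a device identifier, so that the same token replayed from a different device fails validation:

```python
# Hypothetical sketch: bind a session token to a device identifier so the
# token is only valid for the user-device pair it was issued to.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side secret, illustrative only

def issue_token(user_id: str, device_id: str) -> str:
    """Derive a token from the user and the device they authenticated on."""
    msg = f"{user_id}|{device_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def validate(token: str, user_id: str, device_id: str) -> bool:
    """Reject the token if either the user or the device does not match."""
    return hmac.compare_digest(token, issue_token(user_id, device_id))

tok = issue_token("alice", "laptop-042")
print(validate(tok, "alice", "laptop-042"))  # True: same user, same device
print(validate(tok, "alice", "phone-117"))   # False: not bound to this device
```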

Monitoring a user’s behavior over time establishes behavioral patterns, both for the individual and for their business unit, making that authentication harder to spoof; an AI agent can then flag deviations. For example, is a person from finance trying to access the engineering environment, or vice versa? That’s something the SOC needs to know about and investigate further.
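A rough sketch of that kind of deviation check, with invented business-unit access histories and a simple threshold standing in for a learned behavioral model:

```python
# Illustrative only: flag access requests that fall outside a business unit's
# historical pattern. Profiles and the threshold are invented for the example.
from collections import Counter

# Hypothetical per-unit access history (resource -> how often it was touched).
unit_profiles = {
    "finance": Counter({"erp": 940, "payroll": 610, "sharepoint": 220}),
    "engineering": Counter({"gitlab": 1200, "cad": 870, "jira": 450}),
}

def is_deviation(unit: str, resource: str, min_seen: int = 5) -> bool:
    """True if this unit has rarely or never touched the resource."""
    return unit_profiles.get(unit, Counter())[resource] < min_seen

# A finance identity requesting the engineering environment gets routed to
# the SOC for review instead of being silently allowed.
if is_deviation("finance", "gitlab"):
    print("Flag for SOC: finance identity requesting an engineering resource")
```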

At that point, there are countermeasures that can be deployed, both automated and human-initiated. On the automated side, the system may force the individual to re-authenticate. At the same time, the SOC may contact the individual’s supervisor to ask for more context. It could be something as simple as an employee working on a new project with data they’ve never had access to before. Or it could be a bad actor attempting to move laterally within a network.
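One way to picture that tiered response, assuming a risk score like the probability above (the thresholds and actions here are illustrative, not the Navy’s playbook):

```python
# Illustrative response tiers keyed to a behavioral risk score in [0, 1].
def respond(risk: float) -> str:
    if risk >= 0.9:
        return "block the session and open a SOC incident"
    if risk >= 0.6:
        return "force re-authentication (step-up MFA)"
    if risk >= 0.3:
        return "notify the SOC and contact the supervisor for context"
    return "allow and keep monitoring"

print(respond(0.72))  # -> force re-authentication (step-up MFA)
```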

Prioritizing micro-segmentation

“So as people come into the network, having that level of control of your data pathway and those things you need to protect is paramount,” Voelker told the Federal Drive with Terry Gerton. “So the first thing I would recommend for anyone implementing zero trust: Identify those things that you need to protect, implement micro-segmentation right away, and implement attribute-based access control.”

Voelker said the first thing agencies should consider as they implement attribute-based access control is what’s most critical to protect. Any agency likely has a database filled with sensitive data it needs to defend. But any office building also likely has operational technology, such as water, electricity and fire suppression systems controlled by IT. Those systems are often overlooked, and therefore present an easier opportunity for an adversary to breach and move laterally.
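A minimal attribute-based access control check illustrating that prioritization, with hypothetical attributes, two example resource classes (a sensitive database and a fire suppression controller) and a default-deny rule for anything not explicitly covered:

```python
# Hypothetical ABAC sketch: protect the most critical resource classes first
# and deny anything that no policy explicitly covers.
from dataclasses import dataclass

@dataclass
class Request:
    subject_unit: str       # e.g. "finance", "facilities"
    subject_clearance: str  # e.g. "standard", "privileged"
    resource_class: str     # e.g. "sensitive-db", "ot-fire-suppression"

# (resource class, required clearance, units allowed to touch it)
POLICIES = [
    ("sensitive-db", "privileged", {"finance"}),
    ("ot-fire-suppression", "privileged", {"facilities"}),
]

def permit(req: Request) -> bool:
    for res_class, clearance, units in POLICIES:
        if req.resource_class == res_class:
            return req.subject_clearance == clearance and req.subject_unit in units
    return False  # default deny: unlisted resource classes are unreachable

print(permit(Request("finance", "privileged", "sensitive-db")))         # True
print(permit(Request("finance", "privileged", "ot-fire-suppression")))  # False
```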



