
DHS, CISA plan AI-based cybersecurity analytics sandbox • The Register


Two of the US government’s leading security agencies are building a machine learning-based analytics environment to defend against rapidly evolving threats and create more resilient infrastructures for both government entities and private organizations.

The Department of Homeland Security (DHS) – in particular its Science and Technology Directorate research arm – and Cybersecurity and Infrastructure Security Agency (CISA) picture a multicloud collaborative sandbox that will become a training ground for government boffins to test analytic methods and technologies that rely heavily on artificial intelligence (AI) and machine learning techniques.

It also will include an automated machine learning “loop” through which workloads – think exporting and tuning data – will flow.

The CISA Advanced Analytics Platform for Machine Learning (CAP-M) – previously known as CyLab – will drive problem solving around cybersecurity that encompasses both on-premises and cloud environments, according to the agencies.

“Fully realized, CAP-M will feature a multi-cloud environment and multiple data structures, a logical data warehouse to facilitate access across CISA data sets, and a production-like environment to enable realistic testing of vendor solutions,” DHS and CISA wrote in a one-page description of the project. “While initially supporting cyber missions, this environment will be flexible and extensible to support data sets, tools, and collaboration for other infrastructure security missions.”

The facility will be used for continuous experimentation in a range of areas, including analyzing and correlating data to help organizations respond to the changing threat landscape. Data gathered from the experiments will be shared with others in government, academic institutions, and the private sector, they wrote. The plan includes ensuring the security of the platform itself as well as the protection of privacy.

No timeline was given for delivery of the project. That lack of specificity and the project’s broad goals drew a mix of praise and caution from some in the cybersecurity space.

Monti Knode, director of customer success at security firm Horizon3.ai, said the plan from DHS and CISA makes sense and that the agencies' investment is overdue. They need to ensure that CAP-M relieves the unintended problems caused by the rapid proliferation of security technologies that aim to detect incidents.

“Building a lab environment to build analytics skills is critical to our foundational talent in public and private national security,” Knode told The Register. “The tuning of our security stack tooling has contributed overwhelmingly to alert fatigue over the years, leading analysts and practitioners on wild goose chases and rabbit holes, as well as real alerts that matter but are buried. As well, labs rarely replicate the complexity and noise of a live production environment, but this could be a positive step.”

Such an AI-and-machine-learning-based environment will also need a massive influx of data to learn from, he said. That could include creating an automated attacker to repeatedly run attacks to train the analytics tools, create notifications, and teach the system to recognize when an alert was incorrect, he said.
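Knode's idea can be pictured as a simple feedback loop: script an "automated attacker," replay its activity alongside benign noise, and grade the resulting alerts so the pipeline learns which ones were wrong. The event model, score distributions, and detector below are invented for illustration; nothing here reflects CAP-M's actual design.

```python
import random

random.seed(1)

def generate_events(n=1000, attack_rate=0.05):
    """Synthesize labeled telemetry: scripted attacks score higher on average."""
    events = []
    for _ in range(n):
        is_attack = random.random() < attack_rate
        score = random.gauss(7.0 if is_attack else 3.0, 1.5)
        events.append((score, is_attack))
    return events

def grade_alerts(events, threshold=5.0):
    """Compare the detector's alerts against ground truth.

    False positives are the "wild goose chases" that feed back into
    tuning; false negatives are the buried alerts that matter.
    """
    tp = sum(1 for s, a in events if s >= threshold and a)
    fp = sum(1 for s, a in events if s >= threshold and not a)
    fn = sum(1 for s, a in events if s < threshold and a)
    return tp, fp, fn

tp, fp, fn = grade_alerts(generate_events())
print(f"true alerts={tp}, false alerts={fp}, missed attacks={fn}")
```

Because every synthetic event carries a ground-truth label, the same loop that trains the analytics can also measure how much alert fatigue a given tuning produces.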

There are pros and cons to the program, according to Sami Elhini, biometrics specialist at Cerberus Sentinel. Such analysis and continuous learning are necessary, particularly for building a broad, high-level understanding of cyberthreats. That said, some models become so generalized that they fail to identify threats affecting smaller targets, dismissing them as mere noise.

There also is the threat of a nation-state actor targeting the CAP-M platform to learn its strengths and weaknesses to develop exploits or to introduce white noise, Elhini told The Register.

“When using ML and AI to identify patterns and exposing those models to a larger audience, the probability of an exploit increases,” he said, pointing to face recognition as “an easily accessed and tested AI/ML model. Adversaries quickly learned that by introducing noise into face images that was imperceptible to humans, they could fool face recognition systems to produce a false non-match.”
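The attack Elhini describes can be sketched in miniature. The toy below uses a linear "matcher" and an FGSM-style step (perturbing each pixel against the sign of the model's gradient) to flip a confident match into a non-match; the model, dimensions, and step size are all illustrative assumptions, and real attacks on deep face-recognition models need far subtler perturbations than this linear toy does.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096                          # pretend a 64x64 "image", flattened

# A stand-in linear matcher: positive score => match, negative => non-match.
w = rng.normal(size=d)
w -= w.mean()                     # zero-sum weights keep the toy arithmetic exact

# An input the model confidently matches (pixels stay within [0, 1]).
x = 0.5 + 0.1 * np.sign(w)

def match_score(img):
    return float(w @ img)

# FGSM-style step: the gradient of w @ img w.r.t. img is just w,
# so nudging each pixel against sign(w) lowers the score fastest.
eps = 0.15
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(match_score(x))             # positive: match
print(match_score(x_adv))         # negative: fooled into a false non-match
```

The point of the example is Elhini's: once a model is widely accessible, an adversary can probe its gradients (or approximate them) and craft structured noise that the model, but not a human, finds decisive.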

CAP-M is only the latest step by the Biden administration, which has spent two years pushing to shore up the country’s cyber-defenses.

“Like the space race between the US and Soviet Union during the Cold War, the government can play a key role in advancing technological innovation,” Craig Lurey, co-founder and CTO of Keeper Security, told The Register. “Research and development projects within the federal government can help support and catalyze disparate R&D efforts within the private sector. … Cybersecurity is national security and must be prioritized as such.”

Tom Kellermann, senior vice president of cyber strategy at Contrast Security, told The Register that this is a “critical project to improve information sharing on TTPs [tactics, techniques, and procedures] and enhance situational awareness across American cyberspace. … However, ensuring the security of this ecosystem will be of paramount importance given the surge in integrity attacks and island hopping.” ®
