
EU agency maps key cybersecurity issues on Artificial Intelligence – EURACTIV.com


The EU’s Cybersecurity Agency, ENISA, published a series of reports on the cybersecurity challenges for AI on Wednesday (7 June).

The reports were published at the same time as the ENISA-organised AI Cybersecurity Conference, which addressed the cybersecurity implications of AI chatbots, research and innovation, as well as the legal and industrial challenges.

“If we want to both secure AI systems and also ensure privacy, we need to scrutinise how these systems work,” said Juhan Lepassaar, Executive Director of the EU Agency for Cybersecurity.

EU countries give lukewarm reception to cyber defence strategy

EU defence ministers adopted on Tuesday (23 May) conclusions on cyber defence, pointing out the need to avoid duplication in the institutional architecture and stating their priorities on skills development and voluntary coordination in the defence sector.

Privacy issues

ENISA also published reports focusing on the significant impact that AI has on security and privacy, taking as scenarios demand forecasting on electricity grids and medical imaging diagnosis.

“While security and privacy are not necessarily the same, they are intimately related, and equally important,” one of the reports reads.

ENISA recommended that “the entire cybersecurity and privacy context (requirements, threats, vulnerabilities, and controls) must be adapted to the context and reality of the individual organisation”.

Research gaps

On the topic of the EU landscape on AI and cybersecurity in the domain of research and innovation, ENISA mapped the current state of play for AI and cybersecurity to identify potential shortcomings.

The analysis points to six gaps in research and innovation, among them the lack of adequate information and knowledge on the potential of AI solutions for cybersecurity, adequate documentation of deployment projects, and demonstration activities.

Also raised were the small share of prototypes refined through research and development (R&D) that reach the market, a perception gap between the research and business communities, and the limited capacity of such projects to solve existing and emerging problems.

“While the impact of AI on the overall risk landscape brings challenges and opportunities, securing AI and AI-specific vulnerabilities are both organisational and R&D challenges,” said Henrik Junklewitz from the European Commission’s research department.

According to its report on Artificial Intelligence and Cybersecurity Research, ENISA plans to develop a roadmap and establish an observatory for cybersecurity R&D with a focus on AI.

Secured systems

Part of the conference also covered best practices to secure AI systems, with the example of the role of the German Federal Office for Information Security, BSI. The authority shapes information security for digital technologies, including AI.

“We have to develop practical criteria; therefore, AI has to be considered in the use-case system. This is what we have to take into account when we consider securing AI systems,” stated Arndt von Twickel of the BSI.

Given the complexity of the lifecycle of an AI system, new vulnerabilities arise. To secure an AI system successfully, all phases, from planning and data through training and evaluation to operation, have to be considered.

“We look into fundamental properties of AI in different domains. Our contribution will be 1) to develop domain- and use-case-specific documents and technical guidelines, 2) to update the generalised AI model, and to use results from the first two points to contribute to standardisation, regulation and consulting,” von Twickel elaborated.

Skills gap puts EU cybersecurity rule compliance to the test

A new regulatory framework to increase cybersecurity resilience is falling into place at the EU level, but it risks exposing the growing shortage of cyber-talent in regulators and companies.

A number of new regulatory requirements are set to enter into force …

Good practices

ENISA also dedicated a report on a Multilayer Framework for Good Cybersecurity Practices for AI.

The report looks into three layers of AI cybersecurity: basic cybersecurity relevant to AI, AI-specific cybersecurity, and sector-specific cybersecurity for AI, targeting AI stakeholders and national competent authorities (NCAs).

“There are different thresholds, start-ups are coming up with good solutions. The challenge is how do we put the threshold and how do we regulate the big players and the non-EU players?” pointed out Rafael Popper, a researcher at the University of Turku.

While differentiating between the various stakeholders, the report finds that the EU institutions and member states need to collaborate on a globally accepted ethical framework in order to develop universally acceptable measures.

“Regulation is coming in one way or another. The key word here is trusted AI. The EU has a chance now to make them trustworthy. It should be seen as an opportunity, not a challenge,” Junklewitz added.

[Edited by Luca Bertuzzi/Nathalie Weatherald]
