UK tech tsar warns of AI cyberthreats posed to NHS

The UK government’s new artificial intelligence (AI) tsar Ian Hogarth has warned that cybercriminals could use AI to attack the National Health Service (NHS). Hogarth, who chairs the UK’s “Frontier AI” task force, said that AI could be weaponized to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins. Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

“The government is quite rightly putting these threats to the very top of the agenda, but technology leaders need to heed the warning and get moving, to better prepare for the next inevitable attack,” Hogarth told the Financial Times.

Announced by the Prime Minister in April, the UK government’s Frontier AI task force was established in June to lead the safe and reliable development of frontier AI models, including generative AI large language models (LLMs) like ChatGPT and Google Bard. It is backed with £100 million in funding to ensure sovereign capabilities and broad adoption of safe and reliable foundation models, helping cement the UK’s position as a science and technology superpower by 2030.

International collaboration needed to address AI risks

The threats posed by advancing AI technology are fundamentally global risks, Hogarth said. “The kind of risks that we are paying most attention to are augmented national security risks. A huge number of people in technology right now are trying to develop AI systems that are superhuman at writing code. That technology is getting better and better by the day.”

In the same way the UK collaborates with China in aspects of biosecurity and cybersecurity, there is a real value in international collaboration around the larger scale risks of AI, he added. “It’s the sort of thing where you can’t go it alone in terms of trying to contain these threats.”

AI a “chronic risk” to UK national security

Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK’s safety, security, or critical systems at a national level. The latest version describes AI as a “chronic risk”, meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack.

The UK government has committed to hosting the first global summit on AI safety, which will bring together key countries, leading tech companies, and researchers to agree on safety measures to evaluate and monitor risks from AI. The National AI Strategy, published in 2021, outlines steps for how the UK will begin its transition to an AI-enabled economy, the role of research and development in AI growth, and the governance structures that will be required.

Meanwhile, the government’s white paper on AI, published in 2023, commits to establishing a central risk function that will identify and monitor the risks that come from AI. “By addressing these risks effectively, we will be better placed to utilise the advantages of AI.”
