Trend Micro has published its latest State of AI Security Report, highlighting how the pace of artificial intelligence development is contributing to new cybersecurity vulnerabilities in critical infrastructure.
The report details a range of security challenges faced by organisations as they deploy AI technologies, including vulnerabilities in key components, accidental internet exposure, weaknesses in open-source software, and issues with container-based systems.
Critical vulnerabilities
The research identifies vulnerabilities and exploits in vital parts of AI infrastructure. Many AI applications rely on a blend of specialised software components, some of which are susceptible to the same flaws as traditional software. The report notes the discovery of zero-day vulnerabilities in components such as ChromaDB, Redis, NVIDIA Triton, and NVIDIA Container Toolkit, posing significant risks if left unpatched.
In addition to these, the report draws attention to the exposure of servers hosting AI infrastructure to the public internet, often as a result of rapid deployment and inadequate security measures. According to Trend Micro, more than 200 ChromaDB servers, 2,000 Redis servers, and over 10,000 Ollama servers have been found exposed without authentication, leaving them open to malicious probing.
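To illustrate how easily this kind of exposure can be detected, the sketch below probes a Redis server with a plain-text PING: an instance with no password configured answers "+PONG", while one with `requirepass` set answers "-NOAUTH Authentication required.". This is a minimal illustration only, not part of the report; the host name is a placeholder, and it should only ever be run against infrastructure you own.

```python
import socket


def reply_requires_auth(reply: str) -> bool:
    """Interpret the raw reply to an unauthenticated PING.

    Redis answers "+PONG" when no password is configured, and
    "-NOAUTH Authentication required." when `requirepass` is set.
    """
    return reply.startswith("-NOAUTH")


def probe_redis(host: str, port: int = 6379, timeout: float = 3.0) -> bool:
    """Return True if the Redis server at host:port demands authentication."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"PING\r\n")  # inline command; no RESP framing needed
        reply = sock.recv(64).decode("ascii", errors="replace")
    return reply_requires_auth(reply)
```

An exposed server in the report's sense is one where a probe like this, run from the public internet, returns False.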
Open-source and container concerns
The reliance on open-source components in AI frameworks is another source of security risk. Vulnerabilities in these components may go unnoticed once they are integrated into production systems, as demonstrated at the recent Pwn2Own Berlin event. Researchers there identified an exploit in the Redis vector database, attributed to an outdated Lua component.
Continuing the theme of infrastructure risk, the report discusses the widespread use of containers in AI deployments. Containers, while commonly used to improve efficiency, are vulnerable to the same security issues that plague broader cloud and container environments. Pwn2Own researchers also discovered an exploit targeting the NVIDIA Container Toolkit, raising concerns about container management practices in the deployment of AI technologies.
Expert perspectives
"AI may represent the opportunity of the century for ANZ businesses. But those rushing in too fast without taking adequate security precautions may end up causing more harm than good. As our report reveals, too much global AI infrastructure is already being built from unsecured and/or unpatched components, creating an open door for threat actors."
This statement from Mick McCluney, Field CTO for ANZ at Trend Micro, underscores the importance of balancing innovation in AI with a robust approach to cybersecurity.
Stuart MacLellan, Chief Technology Officer at NHS SLAM, also shared perspectives on the organisational implications of these findings:
"There are still lots of questions around AI models and how they could and should be used. We now get much more information than we ever did about the visibility of devices and what applications are being used. It's interesting to collate that data and get dynamic, risk-based alerts on people and what they're doing, depending on policies and processes. That's going to really empower the decisions that are made organisationally around certain products."
Recommended actions
The report sets out several practical steps organisations can take to mitigate risk. These include enhanced patch management, regular vulnerability scanning, maintaining a comprehensive inventory of all software components, and adopting best practices for container management. The report also advises that configuration checks should be undertaken to ensure that critical AI infrastructure is not inadvertently exposed to the internet.
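The configuration checks recommended above can be partly automated. The sketch below, which is an illustration rather than anything prescribed by the report, takes a hypothetical inventory of AI services and their default ports and reports which ones accept a TCP connection from a given vantage point; run from outside the trusted network, any hit indicates unintended exposure.

```python
import socket

# Hypothetical inventory: service name -> default port. Real inventories
# would come from the organisation's asset register.
AI_SERVICES = {
    "chromadb": 8000,
    "redis": 6379,
    "ollama": 11434,
}


def exposed_ports(host: str, services: dict, timeout: float = 2.0) -> list:
    """Return the names of services on `host` that accept a TCP connection."""
    reachable = []
    for name, port in sorted(services.items()):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            # connect_ex returns 0 on success instead of raising
            if sock.connect_ex((host, port)) == 0:
                reachable.append(name)
        finally:
            sock.close()
    return reachable
```

A connection succeeding says nothing about authentication, so such a check is a first filter: anything reachable would still need to be verified against the inventory and its access controls.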
The findings highlight the need for the developer community and users of AI to better balance security with speed to market. Trend Micro recommends that organisations exercise due diligence, particularly as the adoption of AI continues to rise across various sectors.