An evolving threat landscape
The threat these new techniques pose to healthcare institutions should not be underestimated, stressed Prof. Renato Cuocolo, radiologist at the University of Salerno. The damage from data poisoning, for example, takes considerable resources to undo: ‘Once the model has been poisoned, we cannot just go and excise the poisoned data after the fact,’ he explained. ‘We need to retrain the model from scratch, reimplement it from scratch, and validate again. Obviously, this has an order-of-magnitude higher cost compared to traditional software, which can just be straightforwardly patched.’
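To make the cost asymmetry concrete, here is a minimal sketch of a label-flipping poisoning attack on a scikit-learn classifier. The dataset, model, and 5% poisoning rate are illustrative assumptions, not details from the talk; the point is that a fitted model offers no operation to excise poisoned samples after the fact, so the only remedy is a full retrain on verified-clean data.

```python
# Hypothetical illustration of data poisoning: a small fraction of
# flipped labels corrupts the fitted model, and the only fix is a
# full retrain on verified-clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker silently flips 5% of the training labels (illustrative rate).
n_poison = int(0.05 * len(y_train))
poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] ^= 1  # flip labels 0 <-> 1

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

# There is no API to 'remove' poisoned samples from a fitted model;
# the remedy is to rebuild a trusted training set and retrain from scratch.
retrained = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```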
Furthermore, this vulnerability could escalate the threat level of already-feared ransomware attacks: rather than encrypting a hospital’s data, an attacker could silently corrupt just a small percentage of its files – leaving the institution with no way of knowing which data is genuine and which is fake.
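The sketch below illustrates why this variant is harder to detect than encryption. All names, values, and the 2% tampering rate are hypothetical: after the attack, every record still opens and parses normally, so only an integrity baseline taken before the attack (here, a simple hash manifest) can separate true data from fake.

```python
# Hypothetical sketch: silent corruption of a small fraction of records.
# Unlike encryption, every file still reads normally afterwards, so
# without a pre-attack integrity baseline the hospital cannot tell
# genuine records from tampered ones.
import hashlib
import random

records = {f"patient_{i:04d}": f"hb={random.uniform(10, 17):.1f}" for i in range(1000)}

# Defender's baseline, taken BEFORE the attack (this is the only thing
# that later distinguishes true data from fake).
manifest = {k: hashlib.sha256(v.encode()).hexdigest() for k, v in records.items()}

# Attacker silently alters 2% of the records (illustrative rate) with
# plausible but false values.
for key in random.sample(sorted(records), k=20):
    records[key] = f"hb={random.uniform(10, 17):.1f}"

tampered = [k for k, v in records.items()
            if hashlib.sha256(v.encode()).hexdigest() != manifest[k]]
print(f"{len(tampered)} of {len(records)} records no longer match the baseline")
```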
Another vulnerability opened up by generative AI is the possibility of model inversion attacks, Cuocolo continued: ‘If we use generative AI to produce synthetic data for research or training purposes, we have to be aware that certain kinds of prompts can be used to match the generated data a bit too closely.’ For example, an attacker might ask the model to “generate a brain MRI of a 40-year-old male with glioblastoma from Hospital X” – if the model has overfitted, such a prompt could extract not only personal information, but also recognisable imaging data of a real patient whose scans were used in training. ‘The model itself becomes an access point,’ the expert pointed out. ‘And it is more easily accessible than the original data.’
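One way such leakage can be probed for is a nearest-neighbour memorisation check. The sketch below assumes a hypothetical generator callable `generate(prompt)` and defender-side access to the training images as arrays; both names and the distance threshold are illustrative, not a real product API. If a narrowly targeted prompt yields an output that is a near-duplicate of some training sample, the model has memorised rather than synthesised.

```python
# Hypothetical memorisation probe for a generative imaging model.
# `generate` is an assumed stand-in for whatever API the model exposes;
# `training_images` is the defender-held training set, shape (N, H, W).
import numpy as np

def nearest_training_distance(sample: np.ndarray, training_images: np.ndarray) -> float:
    """Smallest per-pixel RMS distance between a generated sample and any training image."""
    diffs = training_images - sample  # broadcasts over the first axis
    return float(np.sqrt((diffs ** 2).mean(axis=(1, 2))).min())

def memorisation_probe(generate, prompt: str, training_images: np.ndarray,
                       threshold: float = 0.05) -> bool:
    """Flag prompts whose output is a near-duplicate of a training sample."""
    sample = generate(prompt)
    return nearest_training_distance(sample, training_images) < threshold

# Illustrative use, with the kind of overly specific prompt Cuocolo warns about:
# leaked = memorisation_probe(
#     generate,
#     "brain MRI of a 40-year-old male with glioblastoma from Hospital X",
#     training_images,
# )
```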
