ISJ hears exclusively from Trevor Dearing, Director of Critical Infrastructure at Illumio.
The European Parliament recently announced that it was disabling the AI features on tablets it provides to lawmakers.
Tools such as writing aids and virtual assistants were blocked because they relied on cloud services to perform tasks that could be handled locally, sending data off the device in the process.
According to the Parliament’s cybersecurity and personal data protection teams, it is assessing the full extent of the data shared with service providers and, “until this is fully clarified, it is considered safer to keep such features disabled.”
The European Parliament’s actions demonstrate that it takes data privacy and security seriously and has real, working cybersecurity processes ready for action.
Why is turning off certain AI features the right move?
In a remarkably short period of time, AI has turned into an everyday productivity tool.
For many European lawmakers, it is fast becoming a default tool for tasks such as writing, research, translation and transcribing meetings.
Like every major technology shift, AI doesn’t just increase productivity; it expands the attack surface.
Many AI tools process large amounts of user data, interact with external cloud services, and store inputs that could include highly sensitive information.
Without proper visibility and control, this presents new risks of data leakage and exposure to external systems that organisations do not fully understand.
When confronted with this challenge, I’ve seen organisations typically take a black-and-white stance: they either impose a blanket ban on AI or set no rules at all.
Neither strategy reflects how modern technology, or modern threats, actually work.
Completely disabling access might eliminate the problem altogether, but it stifles productivity, and people will always find workarounds, creating an even bigger security problem.
On the other hand, doing almost nothing allows AI to wreak havoc by copying, storing, or sharing sensitive information when it shouldn’t, or without anyone knowing.
It opens the door to a security and privacy nightmare.
The European Parliament has taken a more mature and pragmatic approach to AI security.
By temporarily shutting down the features that pose a significant security risk, while leaving low-risk features and core services intact, it has demonstrated how to reduce exposure without bringing productivity to a halt.
This is exactly the kind of resilience we need to see more of.
We’re finally moving beyond the old mindset of “stop everything and pray” and into a new era of containment: acknowledging that risks exist but ensuring they are reduced and do not bring the whole organisation down.
What can organisations learn from the European Parliament?
As I mentioned earlier, most organisations take a binary approach to security: did the threat enter our systems or not? That question is too simplistic to capture the nuances and complexities of modern security.
And the limitation becomes even more pronounced when considering how AI systems operate.
As AI is designed to operate across multiple platforms, environments, and data sources, the blast radius of a compromise can be significant.
Agentic AI with considerable system access and autonomy poses an even greater risk.
Gravitee’s report highlighted that 88% of organisations reported AI agent security incidents in the past year.
This is why containment is so important.
Organisations are dependent on interconnected systems and continuous access to services, so they need to be able to protect the essentials, isolate risks and problems, and keep the business running.
Risks are inevitable, especially when adopting AI.
What matters most is whether organisations can manage those risks without disrupting everything else in the process.
Practically, this means implementing safe defaults and input validation, as well as enforcing restrictive permissions and escalation protocols.
It also means architecting technical boundaries that AI systems cannot cross, even if they are misused or compromised.
The most effective way to enforce these boundaries is through microsegmentation, which confines AI features and tools to a narrow operational zone: they can only communicate with approved systems and access the data they are supposed to.
The security team can then have confidence that AI tools are performing only tasks explicitly defined as safe and accessing only data they are explicitly authorised to use.
Furthermore, when updates are made to AI features that potentially violate the organisation’s risk tolerance, they can be quickly shut down without impacting other areas of the business.
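To make that concrete, here is a minimal sketch of how a default-deny segmentation check might work. The workload names, ports and policy structure are invented for illustration rather than drawn from any vendor’s actual policy model:

```python
# Minimal sketch of a default-deny microsegmentation check for an AI tool.
# Workload names, ports and the policy structure are invented for
# illustration; real platforms have their own policy models.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str       # workload initiating the connection
    destination: str  # workload or endpoint being contacted
    port: int

# Allow-list: the AI writing assistant may reach only the two services
# it genuinely needs. Everything else is denied by default.
ALLOWED_FLOWS = {
    ("ai-writing-assistant", "internal-document-store", 443),
    ("ai-writing-assistant", "on-prem-translation-svc", 8443),
}

def is_permitted(flow: Flow) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return (flow.source, flow.destination, flow.port) in ALLOWED_FLOWS

# An approved flow passes; an attempt to reach an external cloud API does not.
print(is_permitted(Flow("ai-writing-assistant", "internal-document-store", 443)))  # True
print(is_permitted(Flow("ai-writing-assistant", "api.cloud-ai.example", 443)))     # False
```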
Microsegmentation is also where market adoption is heading. Illumio’s recent research shows organisations are increasingly turning to it not just to prevent breaches, but to manage them, citing faster detection and response, and improved visibility as primary benefits.
Ultimately, with AI, organisations should always assume that compromise is inevitable, and architecting systems for containment helps ensure resilience.
How do organisations go about identifying risky data traffic and connectivity from AI tools?
As AI tools become more embedded across the enterprise, organisations need more precise ways to understand and control how those tools interact with their systems and data.
Visibility is the foundation of effective AI security.
Proactively gaining that visibility starts with understanding three things: which AI tools are being used across the organisation, what data employees are sharing with them, and how those tools connect to other internal and external services.
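A rough first pass at the first and third of those, which tools are in use and where they connect, can often come from data organisations already collect, such as network flow logs. The sketch below assumes simplified flow records and an invented list of known AI endpoints:

```python
# Rough sketch: a first inventory of AI tool usage built from network flow
# logs. Field names, the sample records and the endpoint list are invented
# for illustration.
from collections import defaultdict

KNOWN_AI_ENDPOINTS = {"api.ai-writing.example", "api.transcribe.example"}

flow_logs = [
    {"user": "analyst01", "dest": "api.ai-writing.example", "bytes_out": 52_000},
    {"user": "analyst02", "dest": "intranet.local", "bytes_out": 1_200},
    {"user": "analyst01", "dest": "api.transcribe.example", "bytes_out": 410_000},
]

# Aggregate per endpoint: who is using which AI service, and how much
# data is leaving the organisation for it.
usage = defaultdict(lambda: {"users": set(), "bytes_out": 0})
for record in flow_logs:
    if record["dest"] in KNOWN_AI_ENDPOINTS:
        usage[record["dest"]]["users"].add(record["user"])
        usage[record["dest"]]["bytes_out"] += record["bytes_out"]

for dest, info in usage.items():
    print(f"{dest}: {len(info['users'])} user(s), {info['bytes_out']} bytes out")
```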
The problem is that achieving this level of visibility is often extremely challenging for security teams.
One of the biggest barriers is a tendency to track events in isolation, which means teams miss the subtle relationships in how AI interacts with systems and data.
A more effective approach is to build a unified view of AI activity using security graphs.
Graph-based security models map the relationships between users, workloads, data and network connections, allowing teams to see what is happening, how and why.
When combined with AI-driven analytics, security teams gain the context needed to detect malicious or risky behaviour.
It then becomes much easier to understand what an AI tool is doing, what it is connected to and whether those connections align with organisational policy and risk tolerance.
It also becomes immediately clear when an AI tool is communicating with systems that potentially put the organisation at risk.
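As a toy illustration, the sketch below builds a small security graph with the open-source networkx library and asks the policy question directly: can internal data reach an external service through an AI tool? The users, tools and services are invented, and edges point in the direction data flows:

```python
# Toy security graph: users, AI tools, data stores and external services as
# nodes; edges point in the direction data flows. All entities are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edge("user:analyst01", "tool:ai-assistant", relation="prompts")
g.add_edge("data:case-files", "tool:ai-assistant", relation="read_by")
g.add_edge("tool:ai-assistant", "ext:cloud-ai-api", relation="sends_to")
g.add_edge("data:case-files", "user:analyst02", relation="read_by")

# Policy question: can internal data reach an external service via any path?
for data_node in (n for n in g if n.startswith("data:")):
    for ext_node in (n for n in g if n.startswith("ext:")):
        if nx.has_path(g, data_node, ext_node):
            path = " -> ".join(nx.shortest_path(g, data_node, ext_node))
            print(f"Risky path found: {path}")
```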
In the case of the European Parliament, as for many other organisations, the risk was AI tools using cloud services for tasks that could be performed locally.
Crucially, this insight enables action before damage is done.
Rather than relying on detection after sensitive information has already been exposed, organisations can intervene in real time by restricting connectivity, isolating the AI tool or disabling specific features as soon as risk thresholds are crossed.
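A minimal sketch of what such a real-time trigger might look like follows; the thresholds, approved destinations and enforcement hook are all illustrative assumptions rather than any specific product’s API:

```python
# Sketch of a real-time containment trigger for an AI tool. The thresholds,
# approved destinations and enforcement hook are illustrative assumptions,
# not a specific product's API.

MAX_HOURLY_EGRESS = 1_000_000  # illustrative per-hour egress cap, in bytes
APPROVED_DESTINATIONS = {"internal-document-store", "on-prem-translation-svc"}

def quarantine(tool: str, reason: str) -> None:
    # In practice this would call the segmentation platform to cut the
    # tool's network access; here we simply report the action taken.
    print(f"QUARANTINED {tool}: {reason}")

def evaluate_flow(tool: str, dest: str, bytes_out: int, hourly_total: int) -> None:
    """Act as soon as a risk threshold is crossed, before data is exposed."""
    if dest not in APPROVED_DESTINATIONS:
        quarantine(tool, f"connection to unapproved destination {dest}")
    elif hourly_total + bytes_out > MAX_HOURLY_EGRESS:
        quarantine(tool, "egress volume exceeded risk threshold")

# An off-device upload attempt is contained before any data leaves.
evaluate_flow("ai-transcriber", "api.cloud-ai.example", 64_000, hourly_total=0)
```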
Is the European Parliament’s approach the future of AI security?
The European Parliament’s response offers a clear signal of how cybersecurity is evolving.
The goal is not to eliminate all risk, but to ensure that AI-driven activity can be monitored, managed and contained when necessary.
We’re already seeing real-world threats from AI systems operating beyond their intended boundaries.
An Alibaba-affiliated team found an AI agent attempting unauthorised cryptocurrency mining during training.
If internal security alarms hadn’t been triggered, they would have been left with a hidden backdoor into their systems.
At the same time, findings from Irregular Research showed that even without explicit malicious intent, AI agents can develop strategies that resemble offensive operations when optimising for goals.
We’re rapidly approaching an “AI event horizon” where the attacker’s advantage becomes nonlinear and security strategies that rely solely on detection or incident response will struggle to keep up.
Designing for containment is what allows organisations to operate confidently in that environment.
The European Parliament’s response is an early example of this shift. It won’t be the last.
As AI adoption accelerates, containment will move from being a best practice to a baseline requirement for organisations that want to remain both secure and productive.
