They Were Already Inside | CDOTrends


The worst moment in a cyber incident is not when the ransomware screen appears. It’s the second after, when your team reaches for the backup and realizes it’s already gone.

That inversion is no longer a thought experiment. It was the central, uncomfortable fact driving a sharp roundtable hosted by CDOTrends in partnership with Everpure (formerly Pure Storage) on April 9: “When the Backup Becomes the Breach: Rethinking Cyber Resilience in a Sovereign Data World.” Three practitioners sat down to work through it — Matthew Oostveen, vice president and chief technology officer for Asia Pacific and Japan at Everpure; Silvia Ihensekhien, director of information security and risk management at Swire Coca-Cola; and Dr. Shakti Goel, chief architect and data scientist at Yatra Online. Their combined take is not comforting but clarifying.

The backup is the target now

Oostveen began by explaining how we got here. For years, backup was the last line of defense — immutable, air-gapped, settled. Ransomware gangs have read that playbook. They rewrote it. Their version doesn’t start with the encryption payload. It starts with the backup environment, weeks or months before you know anything is wrong.

The rewrite runs on patience. Threat actors are no longer blitzing in, encrypting, and running. They dwell. They observe. They learn the rhythm of the business. “The average is sitting at around 200 days now,” Oostveen said. Two hundred days inside a network, quietly mapping backup servers, compromising admin credentials, deleting snapshots — before the encryption screen ever appears. By then, the safety net is already cut. “If you aren’t treating your backup environment as your most sensitive, high-value target, you’re essentially just leaving the keys to the vault under your doormat.”

The implication is architectural. Immutability — once the gold standard — is now table stakes. The question has shifted from whether your data was deleted to whether it was corrupted at the source — silently, weeks ago, waiting to be restored at exactly the wrong moment. What 2026 demands is continuous behavioral validation inside the data itself: the ability to catch corruption before a poisoned backup gets promoted back to production.
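
None of the panelists walked through an implementation, but the pre-restore check at the heart of that idea is easy to sketch. Below is a minimal, illustrative Python version, assuming each snapshot records a SHA-256 digest when it is sealed; genuine behavioral validation would go further, watching for anomalous change patterns rather than a single hash, and every name here is hypothetical:

```python
# Minimal sketch: refuse to promote a snapshot whose contents no longer
# match the digest recorded when it was sealed. Illustrative only; real
# behavioral validation inspects change patterns, not just one hash.
import hashlib
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class Snapshot:
    blob: Path          # where the backup payload lives
    sealed_digest: str  # digest recorded at backup time, stored out-of-band

def current_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_restore(snap: Snapshot) -> bool:
    """A snapshot altered after sealing is treated as poisoned."""
    return current_digest(snap.blob) == snap.sealed_digest
```

The point is where the check sits: in the restore path itself, so a poisoned copy is caught before it is promoted back to production.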

A second, slower threat is building alongside this one. As organizations race to build data lakes to feed their AI models, they’re assembling enormous, high-value repositories. The attackers know this. “These bad actors are acutely aware of the fact that we are building these honeypots,” Oostveen warned. And the attack mode shifts with it: instead of visibly encrypting data, an adversary who wants to undermine an AI system simply changes a small fraction of it — undetected. Train a model on corrupted data and the drift compounds. “If that’s changed and then trained upon… we’re going to get a level of drift… it would end up negatively affecting a share price.” That’s not a future problem but a present design failure.

Sovereignty is not where you think

Surviving a ransomware attack used to be the finish line. Now it may just be the start of a second crisis — a compliance disaster triggered by the recovery itself. Pull data across the wrong jurisdictional border in a panic, and your disaster recovery plan doesn’t save you. It becomes the liability.

Ihensekhien was direct. She runs security across IT and operational technology environments spanning multiple countries, each with distinct regulatory regimes, political constraints, and data protection laws. One-size-fits-all doesn’t cut it. “Cyber resilience cannot just be defined as simply keeping systems up or restoring data after an incident — it means the ability to maintain a safe, compliant and operationally viable business continuity across different jurisdictions.”

“If you aren’t treating your backup environment as your most sensitive, high-value target, you’re essentially just leaving the keys to the vault under your doormat.” — Matthew Oostveen @ Everpure

Oostveen pushed harder on a structural flaw most organizations haven’t confronted: the fiction of data residency. Providers market sovereign infrastructure, but the infrastructure may still be legally reachable. Under the U.S. CLOUD Act, agencies including the CIA, FBI, and NSA can compel access to data residing in overseas data centers through court orders — orders that may never be disclosed to the data owner. Oostveen has a name for what companies are buying instead: sovereign washing. Services “labeled as sovereign and sold as sovereign, but they’re truly not” because the underlying ownership structures reach across borders regardless of where the servers physically sit. Knowing which data center your data is in is not the same as knowing which country’s courts have jurisdiction over it.

Dr. Goel is navigating this in real time. Yatra Online operates under India’s Digital Personal Data Protection Act, SOX compliance obligations as a NASDAQ-listed company, and the data requirements of major international corporate clients from Japan, the U.S., and the Middle East. His architecture is a three-layer system: isolated storage; a metadata layer that dynamically tags every backup with sovereignty, compliance, and residency attributes; and a policy engine that governs what can be restored where — automatically blocking moves that would violate jurisdictional constraints. The catch: the laws themselves keep changing. India’s DPDP doesn’t require full compliance until May 2027. The act has no provisions yet for agentic AI. Goel’s architecture must be built for regulatory versions that don’t yet exist. The goal, he said, is to be compliant by intention, not by forensics.
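
Goel didn’t share code, but the shape of his metadata layer can be sketched. In the illustrative Python below, every field name and value is an assumption, not Yatra’s actual schema; the point is that jurisdiction travels with the backup record itself:

```python
# Illustrative sketch of a metadata layer that tags each backup with
# sovereignty, compliance, and residency attributes at creation time.
# Schema and values are assumptions, not Yatra's actual design.
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupTags:
    origin_jurisdiction: str            # e.g. "IN" for data under the DPDP Act
    allowed_restore_regions: frozenset  # where a restore may legally land
    compliance_regimes: frozenset       # e.g. {"DPDP", "SOX"}

def tag_backup(backup_id: str, origin: str,
               regions: set, regimes: set) -> dict:
    """Attach immutable jurisdiction metadata to a backup record."""
    return {
        "backup_id": backup_id,
        "tags": BackupTags(origin, frozenset(regions), frozenset(regimes)),
    }

record = tag_backup("bkp-0001", "IN", {"IN"}, {"DPDP", "SOX"})
```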

Accountability has no address

When a recovery action violates a data residency mandate, and no one finds out until after the fact, blame scatters fast. The CISO calls it an infrastructure decision. Infrastructure says it followed the playbook. The board says it wasn’t briefed. Goel named the pattern directly: “There is diffusion of accountability.”

His architectural answer is to make the backup carry its own governance. When sovereignty, compliance, and residency requirements are embedded as attributes at the metadata layer, the policy engine enforces them at recovery time — automatically, without requiring human judgment in the moment. The constraints are baked in before the crisis. The 2 a.m. decision gets made at design time instead.
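
A toy version of that policy engine makes the idea concrete. The sketch below is hypothetical end to end, but it shows what “enforced at recovery time” means in practice: the refusal happens before any data moves.

```python
# Toy policy engine: a recovery request is checked against the backup's
# embedded residency tags, and cross-border restores are refused
# automatically. Names and structure are hypothetical.
def authorize_restore(allowed_regions: set, target_region: str) -> None:
    """Raise before any data moves if the target violates residency rules."""
    if target_region not in allowed_regions:
        raise PermissionError(
            f"Restore to {target_region!r} blocked: backup is restricted "
            f"to {sorted(allowed_regions)}"
        )

# The 2 a.m. decision, made at design time:
try:
    authorize_restore({"IN"}, "US")
except PermissionError as err:
    print(err)  # Restore to 'US' blocked: backup is restricted to ['IN']
```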

Accountability diffusion is a cultural problem, too. Oostveen refused to frame it as pure friction. Most technologists who create these gaps are well-intentioned. They want to solve problems. They move fast. And they move alone. “Resilience isn’t just a technical checkbox; it’s a team sport.” Legal needs to be in the room. HR. The board. The COO. The individuals doing the work need the authority — and the mandate — to bring people along.

Ihensekhien, who spends a lot of her time not in the SOC but in boardrooms translating risk into decisions, knows this cold. Her advice: “Get the board or executive manager buy-in… be the best BFF of your CFO.” Technical implementation is rarely the hard part. “Technical is easy. You just collect all the logs. But how do you make this work within the components” — the legal requirements, the political constraints, the operational realities — that’s what teams routinely underestimate.

What to do on Monday morning

Asked for the best next step, Oostveen offered two.

First, kill single-point-of-failure admin permissions. No single individual should have god-mode authority to execute a mass delete inside a backup environment. Distribute the responsibility; require two keys to turn. Second, run an unannounced fire drill. “Do a zero-knowledge recovery. Don’t tell your team about it. Throw a non-critical, complex application. Assume your primary storage is gone, assume the admin credentials are compromised, assume the worst — and then get the team to recover it.” The gap between your documented resilience posture and your actual recovery capability is usually revealed right there.
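
The first recommendation has a simple software shape: a two-person rule on destructive operations. The sketch below is illustrative, not a product feature; a real deployment would demand signed, hardware-backed approvals rather than a set of names:

```python
# Sketch of "two keys to turn": a mass delete in the backup environment
# proceeds only with approvals from two distinct administrators, so one
# compromised credential is no longer enough. Illustrative names only.
def execute_mass_delete(snapshot_ids: list, approvers: set) -> None:
    if len(approvers) < 2:
        raise PermissionError(
            f"Mass delete requires two distinct approvers; got {len(approvers)}"
        )
    for sid in snapshot_ids:
        print(f"deleting {sid}")  # stand-in for the real delete call

try:
    execute_mass_delete(["snap-001"], {"admin-a"})             # blocked
except PermissionError as err:
    print(err)
execute_mass_delete(["snap-001"], {"admin-a", "admin-b"})      # proceeds
```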

Dr. Goel offered a 90-day roadmap built around attribute-based access control: 30 days to identify and classify all sensitive data across the environment; 30 days to build the metadata layer, dynamic tagging, and policy engine; and a final 30 days to harden the attributes so that a backup physically cannot move outside its jurisdictional boundary. Then run fire drills until the system proves it works, and keep running them after it does.

Oostveen closed with advice on sovereignty for organizations that feel overwhelmed by the scope: don’t run it as a standalone project. “I recommend doing it as a plus project.” Layer sovereignty requirements onto cloud migration, application modernization, or infrastructure refresh already in progress. It becomes a new metric, not a new burden.

The harder question underneath everything — the one that closed the session — is not just whether you can recover. It’s whether you know what “clean” actually means, in real time, across multiple jurisdictions, when your AI agents may have already moved the data before the incident began.

Your board will ask that question, probably after something goes wrong. The point of conversations like this one is to be ready before they do.

Image credit: iStockphoto/boggy22


