For most of the last decade, security teams haven’t had the bandwidth to think much about how they secure customer and employee experience platforms. And that made sense: these tools collected feedback, generated a report and passed it to a human to act on. The risk profile was low.
But that calculation no longer holds. These platforms now connect directly to HR systems, CRM databases and compensation engines.
In the world of agentic experience management programs, these platforms have growing access to business-critical operations, and that means they can no longer be treated as simple survey tools. Security teams must now consider how they govern systems with this level of access to sensitive data.
The exposure surface is bigger than you think
What makes this harder is the sensitivity of the data itself. Customer experience programs shape pricing and product decisions. Employee experience programs surface concerns about leadership and workplace safety, feeding directly into HR decisions. But unlike many other data types, this information rarely maps neatly onto Personally Identifiable Information (PII) or other easily identifiable categories of sensitive data.
Then there’s the shadow AI problem. Half of employees now use AI tools regularly at work, but only 20% stick to company-approved ones. That means sensitive experience data is already moving through workflows security teams don’t know exist, but banning tools outright removes your visibility into the risk rather than eliminating it.
If you’re deploying AI in customer-facing environments, these are the areas I’d focus on:
1. What is your platform actually connected to, and what decisions does it influence? Most teams have mapped integrations at a technical level. Fewer have mapped the business decisions downstream of those integrations, including automated workflows. If you can’t answer this confidently, you have a meaningful gap regardless of your compliance setup.
2. How are you validating the integrity of inputs? Is your feedback data authentic as well as complete? Could it be manipulated to skew a business outcome? This requires moving beyond standard input validation into intent and anomaly detection (a minimal sketch follows this list).
3. How quickly can you detect and act when something goes wrong? As AI systems become more agentic, continuous monitoring isn’t optional. You need mechanisms that flag abnormal outputs and allow you to intervene before a misconfigured or manipulated AI agent compounds the problem at scale.
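To make the second point concrete, here is a minimal sketch of batch-level anomaly detection on feedback inputs. Everything in it is an illustrative assumption: the field names, thresholds and the choice of a simple z-score baseline are mine, not any particular platform’s API.

    # A minimal sketch of input anomaly detection for feedback data.
    # Field names and thresholds are illustrative assumptions only.
    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class FeedbackBatch:
        source: str          # e.g. "post-purchase-survey"
        scores: list[float]  # e.g. 1-5 satisfaction ratings
        volume: int          # submissions in this time window

    def flag_anomalies(current: FeedbackBatch,
                       history: list[FeedbackBatch],
                       z_threshold: float = 3.0,
                       volume_multiplier: float = 5.0) -> list[str]:
        """Flag batches whose score distribution or submission volume
        deviates sharply from the historical baseline for a source."""
        flags = []
        baseline_means = [mean(b.scores) for b in history if b.scores]
        if current.scores and len(baseline_means) >= 2:
            mu, sigma = mean(baseline_means), stdev(baseline_means)
            if sigma > 0 and abs(mean(current.scores) - mu) / sigma > z_threshold:
                flags.append("score distribution shifted beyond baseline")
        baseline_volume = mean(b.volume for b in history) if history else 0
        if baseline_volume and current.volume > volume_multiplier * baseline_volume:
            flags.append("submission volume spike; possible automated stuffing")
        return flags

    # Usage: a sudden flood of identical low scores trips the volume check.
    history = [
        FeedbackBatch("post-purchase", [4.1, 4.3, 4.0], 120),
        FeedbackBatch("post-purchase", [4.2, 4.4, 4.1], 115),
        FeedbackBatch("post-purchase", [4.0, 4.2, 4.3], 125),
    ]
    suspicious = FeedbackBatch("post-purchase", [1.0] * 40, 900)
    print(flag_anomalies(suspicious, history))

In practice, flags like these would feed the same alerting pipeline that watches agent outputs, so a skewed input stream and an abnormal downstream decision can be correlated quickly.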
When AI fails in public, it fails fast
Trust with customers is built over years and lost in seconds. I watched a major retailer spend years rebuilding customer trust after a data breach in the early 2010s. This matters even more in an AI world, where people want to be confident the data they’re sharing is protected at the highest levels.
Our research puts numbers to what many security leaders already sense. 53% of consumers say misuse of personal data is their top concern when companies use AI to automate interactions, and this is up eight points in the past year. Two-thirds want personalized experiences, but only 40% think the benefits are worth the privacy trade-offs. Nearly half say they’d share more data if organizations were simply more transparent about how it’s used.
In a market still working out how to use AI, trust in AI solutions is the main driver of adoption for both consumers and companies. Managing that trust will make or break the companies vying for their customers’ engagement.
Most organizations have mapped their technical blast radius: which systems connect, which APIs are open, where data flows. Fewer have mapped their business blast radius: the real cost if the data is wrong, biased, or manipulated. When a chatbot hallucinates a refund policy, exposes personal data, or fabricates an answer, it is a brand failure directly in front of customers. One poorly tested AI agent can damage thousands of customer relationships before anyone in the business notices.
The conversation security leaders need to be driving is “how do we monitor these systems continuously once they’re live?”
Security is a commercial factor
Businesses are under real pressure to move fast. Stakeholders want transformation, CX teams want automation, and security teams raising concerns about bias, compliance and data exposure get positioned as the blocker. I understand the frustration on both sides, but the goal should be working together, not one side inheriting the decision.
The organizations getting this right are embedding security into platform defaults, so guardrails are already in place when a team spins up a new integration. Platform vendors need to do more here too: clearer visibility into what’s connected, what permissions are active, and when integrations were last reviewed.
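As a hedged illustration of what those defaults could look like, the sketch below shows a new integration that starts with least-privilege scopes, PII redaction switched on and a mandatory review cadence. The field names and intervals are hypothetical, not a real vendor’s schema.

    # A sketch of "secure by default" integration settings.
    # All names and values here are hypothetical assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    @dataclass
    class IntegrationPolicy:
        name: str
        # Least privilege by default: read-only until scopes are widened deliberately.
        scopes: list[str] = field(default_factory=lambda: ["read:responses"])
        pii_redaction: bool = True        # on unless explicitly disabled
        last_reviewed: datetime | None = None
        review_interval: timedelta = timedelta(days=90)

        def needs_review(self) -> bool:
            """Brand-new integrations and stale ones both surface for review."""
            if self.last_reviewed is None:
                return True
            return datetime.now(timezone.utc) - self.last_reviewed > self.review_interval

    # Usage: a freshly spun-up integration starts locked down and flagged.
    crm_sync = IntegrationPolicy(name="crm-sync")
    assert crm_sync.pii_redaction and crm_sync.needs_review()

The design point is that the safe posture requires no action from the team spinning up the integration; widening access is the step that demands a deliberate, reviewable decision.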
I’ve seen this shift happen in real conversations. Security leaders who can demonstrate rigorous controls, monitoring, validated data practices and certifications find that procurement conversations accelerate and timelines compress.
Security has spent decades being framed as a cost center. In a world where experience platforms are AI-powered and connected to the most sensitive operations across a business, that framing is outdated.
When security is visible and credible, employees and customers feel more comfortable sharing their data. This produces sharper AI outputs and builds trust. It all compounds.
