Proofpoint outlines enterprise AI security controls for tools, agents, and data


Enterprise AI security requires better visibility across users, agents, data, and connected systems.


Proofpoint has outlined an AI security approach that brings together collaboration security, data protection, AI governance, and runtime controls as enterprises adopt AI tools and agents across their environments.

During a recent media briefing, Jennifer Cheng, Director of Cybersecurity Strategy, APJ at Proofpoint, said the company is focusing on the intersection of people, data, and AI as work expands beyond email and traditional collaboration channels.

“Humans are not working alone anymore. They are working alongside AI tools and autonomous agents,” Cheng said. “We believe the future is looking at humans, agentic AI, and systems all working together in what we are defining as the agentic workspace.”

Proofpoint’s business has grown since it was taken private by Thoma Bravo in 2021, according to Cheng. The company now serves nearly three million customers worldwide, including large enterprises, government agencies, and public-sector organizations. In APJ, its team has grown to more than 300 people, about three times larger than in 2019.

Proofpoint continues to process email data at large scale, with visibility across trillions of emails, Cheng said. While the company remains known for email security, she said email has become more than a communication channel. It has also become an identity artifact that attackers use to target organizations through phishing, business email compromise, and account takeovers.

Proofpoint’s current focus extends across collaboration tools, SMS messaging, phishing simulations, cloud accounts, insider threats, and data protection. The company has integrated several acquisitions into its platform to support data security and governance, including data loss prevention, Cheng said.

The briefing also covered Proofpoint Nexus, the company’s detection platform. Nexus uses data from across Proofpoint’s systems to support detection models and help organizations understand risk across users, data, and AI activity, according to Cheng.

Tim Choi, Group Vice President of Product Marketing at Proofpoint, said enterprise AI adoption is creating three main security concerns: how users access AI tools, how organizations build and deploy AI agents, and how AI tools connect to enterprise systems and data.

Proofpoint’s research found that 68% of employees admitted to using AI tools that were not approved by their employers, Choi said. These tools include both web-based services and software installed on endpoints, such as desktop AI applications and AI-enabled browsers.

“The first question many security professionals have is, what are those tools, and what are my users utilizing to achieve their work?” Choi said.

Visibility into prompts, responses, and connections is important because AI interactions can include attempts to extract information, bypass guardrails, or return unsafe output, he said. AI tools may also connect to messaging systems, middleware, repositories, or business data.

Asked what first steps organizations should take, Choi said companies should start with governance before deploying technical controls. “The organization must have an AI safe usage policy document,” he said, adding that business and functional teams should agree on how AI is used before controls are mapped to risk scenarios.

Choi said AI agents add another layer of risk because their actions are not limited to a single prompt and response. Agents can call language models, Model Context Protocol (MCP) servers, tools, and services across multiple steps.

“Every micro-step could introduce risk, which enhances the importance of understanding what is happening inside that agent,” he said.
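Proofpoint did not share implementation details, but the idea of making every micro-step of an agent run visible can be sketched as a trace that records each model call, tool invocation, and service request for later review. All names and fields below are illustrative assumptions, not Proofpoint's schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    """One micro-step in an agent run (illustrative schema)."""
    index: int
    kind: str      # e.g. "llm_call", "mcp_tool", "service_call"
    target: str    # model, tool, or endpoint name
    summary: str   # short description of input/output for review

@dataclass
class AgentTrace:
    run_id: str
    steps: list = field(default_factory=list)

    def record(self, kind: str, target: str, summary: str) -> None:
        """Append a step so each action inside the agent is auditable."""
        self.steps.append(AgentStep(len(self.steps) + 1, kind, target, summary))

trace = AgentTrace(run_id="expense-agent-42")
trace.record("llm_call", "planner-model", "plan: look up travel policy, then file report")
trace.record("mcp_tool", "sharepoint.search", "query: travel reimbursement policy")
trace.record("service_call", "erp.submit_expense", "amount: 120.00 EUR")
print(len(trace.steps))  # three recorded micro-steps
```

A trace like this is what lets a security team ask, after the fact, which systems an agent actually touched on the way to its answer.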

Proofpoint’s AI security portfolio includes AI Security for Access, AI Security for Agents, and AI Security for MCP. AI Security for Access focuses on discovering AI tools, controlling usage, and monitoring prompts, replies, links, content, and payloads, according to Choi. AI Security for Agents provides visibility into agent behavior and applies guardrails and runtime controls, while AI Security for MCP acts as a gateway between AI tools and enterprise systems.
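Proofpoint did not describe how its MCP gateway works internally. Conceptually, a gateway sitting between AI tools and enterprise systems inspects each request and applies per-client policy before forwarding it. The following is a minimal sketch of that pattern; the client names, action names, and policy rules are invented for illustration:

```python
# Hypothetical per-client policy: which enterprise actions each AI
# client may perform through the gateway. Entries are illustrative.
POLICY = {
    "approved-assistant": {"sharepoint.read", "crm.read"},
    "unreviewed-plugin": set(),  # no enterprise access granted
}

def gateway(client: str, action: str) -> str:
    """Return 'forward' if the client may perform the action, else 'deny'."""
    allowed = POLICY.get(client, set())
    return "forward" if action in allowed else "deny"

print(gateway("approved-assistant", "sharepoint.read"))  # forward
print(gateway("approved-assistant", "crm.write"))        # deny
print(gateway("unreviewed-plugin", "sharepoint.read"))   # deny
```

The point of the chokepoint design is that unknown clients and unlisted actions are denied by default, rather than relying on each AI tool to police itself.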

Existing security tools remain part of enterprise AI security planning, Choi said. Proofpoint is in discussions with industry peers on integrations through MCP servers, which can link security tools and support faster retrieval of information across connected systems.

Data exposure remains a concern

Richard Combes, Head of Data Security Sales Engineering for EMEA and APJ at Proofpoint, said data security has become harder as data volumes increase and AI tools access more enterprise content.

“We are expecting a 300% growth in the amount of data over the next five years,” Combes said. “More data will be touched by more systems at machine speed.”

The main risks include data loss across multiple channels, excessive internal file access, insider misuse, and GenAI applications exposing sensitive data at scale, according to Combes. Shadow AI is a concern because unsanctioned tools sit outside company-approved contracts and controls.

Asked about first steps, Combes said organizations should map their AI data early. That includes identifying what AI tools are in use, what data they access, where that data comes from, who owns it, and what outputs and logs are created. Those steps should sit alongside governance policies, access controls, guardrails, and regular risk checks, he said.
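The inventory Combes describes maps naturally onto a simple record per AI tool. As a rough illustration only (the field names and example values are assumptions, not a Proofpoint format), each entry captures the tool, the data it accesses, its source, its owner, and the outputs it creates:

```python
from dataclasses import dataclass

@dataclass
class AIDataMapEntry:
    """One row in an AI data inventory (illustrative fields)."""
    tool: str            # which AI tool is in use
    data_accessed: str   # what data it accesses
    source: str          # where that data comes from
    owner: str           # who owns the data
    outputs: str         # what outputs and logs are created

entry = AIDataMapEntry(
    tool="chat-assistant",
    data_accessed="customer contact records",
    source="CRM export",
    owner="sales operations",
    outputs="chat transcripts retained 30 days",
)
print(entry.tool, "->", entry.data_accessed)
```

Even a flat list of such records gives governance, access control, and risk reviews something concrete to check against.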

He cited an example from New South Wales, where a contractor working on a flood recovery program reportedly entered a spreadsheet of about 3,000 names into ChatGPT to help format and extract data. According to Combes, the file included contact information and, in some cases, personal health information.

Combes demonstrated Proofpoint’s AI data governance module, which shows approved and shadow AI applications, risky prompts, uploaded files, connected repositories, and users driving higher exposure. The system can also identify AI tools connected to platforms such as SharePoint and allow those connections to be revoked, he said.

He also showed how the platform handles sensitive data shared with AI tools. In one example, a sanctioned AI tool was allowed to process code, but a plain-text password was redacted before submission.

“The goal is not to block all AI usage, but to stop sensitive data from going into it,” Combes said.
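The redaction step in the demo can be sketched as a pattern match over the outbound prompt that replaces credential-like values before submission. This is a deliberately simple example, not a production-grade detector and not Proofpoint's implementation; the pattern and sample snippet are invented:

```python
import re

# Illustrative rule: catch values assigned to names like "password",
# "api_key", or "secret" in text headed for a sanctioned AI tool.
SECRET_PATTERN = re.compile(
    r'(?i)\b(password|passwd|api[_-]?key|secret)\b\s*[=:]\s*[^\s,)]+'
)

def redact(prompt: str) -> str:
    """Replace credential-like values with a placeholder before submission."""
    return SECRET_PATTERN.sub(lambda m: f"{m.group(1)}=<REDACTED>", prompt)

code_snippet = 'db.connect(host="db1", user="svc", password=hunter2)'
print(redact(code_snippet))  # password value replaced with <REDACTED>
```

Real DLP engines combine many such detectors with validation and context, but the principle matches the demo: the code goes through, the secret does not.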

Asked where organizations still lack visibility, Cheng said many are looking closely at agents and AI, but the wider issue is how AI affects existing gaps. “What AI really does is accelerate threats,” she said. “It makes the gaps bigger, the threats more prevalent, and the volume higher.”

Organizations should assess whether existing tools address current risks, while also looking at behavior, intent, and interactions across humans, agents, AI systems, and communication channels, according to Cheng.
