
As artificial intelligence tools proliferate within organizations, security risks are surging, complicating the already challenging landscape of cyber defense. Businesses eager to harness AI’s promise are finding themselves unprepared to manage threats ranging from data leakage to sophisticated phishing attacks, according to a new report from the cybersecurity firm Check Point Software Technologies.
Check Point surveyed more than 1,000 security professionals, finding that 47% had detected employees uploading sensitive data to AI tools within the past year. The report underscores a growing sense of urgency among CISOs and IT leaders, as companies’ embrace of generative AI often outpaces the deployment of robust guardrails.
“Many organizations are struggling to implement adequate controls,” said Sergey Shykevich, threat intelligence group manager at Check Point. “The boundary between employee curiosity and dangerous data exposure is thin.”
The rapid pace of AI adoption opens new attack surfaces for threat actors, who are adapting their tactics to exploit the growing reliance on large language models and automation. A majority of respondents (70%) said criminals are already leveraging generative AI to run phishing and social engineering campaigns, lending their attacks greater sophistication and credibility.
Security leaders are reporting concrete consequences. Around 16% of those surveyed said their companies had suffered incidents of data leakage in the past year that were directly linked to the use of generative AI applications. In some cases, employees had inadvertently input confidential information—such as customer records, source code, or strategic documents—into external AI services, exposing it to unintended parties.
This type of incident is not merely hypothetical. In March, Samsung disclosed that it had banned staff from using ChatGPT after an engineer uploaded sensitive internal code to the tool. Since then, companies in sectors ranging from banking to defense have issued similar directives, with some building bespoke in-house AI systems to avoid sharing data with external providers.
Yet, even companies with strict policies often struggle to keep tabs on the web of third-party AI tools entering their networks. “Shadow AI,” as described in the Check Point report, refers to employees bypassing official channels to tinker with AI models, sometimes for benign purposes—such as drafting emails or summarizing meeting notes—but not always with security in mind.
For many IT teams, enforcement remains a hurdle. “If employees can use something that’s going to make them more productive, the risk is they’ll find a way, regardless of corporate policy or training,” said Lisa Plaggemier, executive director at the National Cybersecurity Alliance.
Indeed, only 28% of those polled by Check Point said their organizations had comprehensive, up-to-date policies specifically governing the use of generative AI. For the remainder, security monitoring is often reactive, addressing incidents as they arise rather than preventing them.
Meanwhile, attackers are innovating. The report highlights a new tier of cyber threats enabled by AI tools, including deepfake videos and voice impersonation, which can be deployed in spear-phishing schemes to manipulate employees or executives. Detection and attribution become more challenging, as signs of malicious AI-generated content are easily overlooked.
Regulatory bodies are taking notice. In the U.S., the Securities and Exchange Commission has pressed companies to disclose AI and cyber-related risks, while the European Union’s AI Act, passed in March, sets strict standards for transparency and accountability.
Still, the report suggests practical steps for risk mitigation. These include employee education on AI-specific threats; greater investment in data loss prevention technologies capable of monitoring AI tool usage; and the adoption of approved, organization-hosted AI solutions that keep sensitive information within trusted environments.
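To make the data loss prevention recommendation concrete, the sketch below shows the kind of pattern-based screening such a tool might apply to prompts bound for external AI services. The patterns, thresholds, and function names are hypothetical stand-ins; commercial DLP products use far richer rulesets and content classifiers.

```python
import re

# Hypothetical patterns a DLP layer might flag before a prompt leaves the network.
# Real deployments would rely on a dedicated DLP engine with broader coverage.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    # Example of a prompt an employee might paste into an external AI tool.
    prompt = "Summarize this INTERNAL ONLY roadmap and send it to jane.doe@example.com"
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matches sensitive patterns {findings}")
    else:
        print("Prompt allowed")
```

In practice, a check like this would sit in a proxy or browser extension in front of approved AI endpoints, logging or blocking matches rather than silently allowing them.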
The stakes, experts caution, are only getting higher. “Generative AI is not just a risk amplifier—it’s a structural change in how information flows through a company,” Shykevich said. “Defenses need to evolve just as quickly as the technology itself.”