Shadow AI Spies on Companies, Security Teams Are the Main Actors


JAKARTA – The “Shadow AI” phenomenon has become a new threat to global companies, and ironically, the main perpetrators come from cybersecurity teams themselves.

The term Shadow AI refers to the use of artificial intelligence tools without an organization’s official approval. The practice is growing rapidly as pressure for workplace efficiency increases, but it carries serious risks to data security, compliance, and company reputation.

The latest report from UpGuard shows that nearly 90 percent of security professionals use AI tools their employers have not approved. In other words, even the division in charge of protecting systems is taking shortcuts in the name of productivity.

From Quick Solutions to a Big Risk

At first glance, using AI to speed up tasks seems like a logical decision: a three-hour job can be completed in minutes. However, AI is not merely a data storage tool like a regular cloud service. These systems process, learn from, and in some cases retain the data users enter.

This is where the danger lies. If employees paste internal documents, source code, or sensitive information into an unsupervised AI platform, that data can be exposed to third parties.

The financial risk is also significant. Research from Netwrix shows that companies with high levels of Shadow AI use incur data breach costs more than 600,000 US dollars higher than organizations that use only sanctioned tools.

The problem is not limited to data leaks. AI can also generate false information, or “hallucinations,” that then finds its way into executive reports, business analyses, or even production code. When AI output is trusted without human verification, small errors can snowball into large losses.

New Threat: Agentic AI

The risk of Shadow AI is now evolving to the next level through what is called “Agentic AI.” Unlike a regular chatbot, an agentic AI can take direct action – reading emails, moving files, and running code on the user’s behalf.

One of the most talked-about examples is OpenClaw. Platforms of this kind offer extreme productivity gains, but they also open up new attack vectors.

Security researchers recently discovered a community extension for OpenClaw that turned out to be malware in disguise, secretly sending data to an unknown server. Because such a system acts with the user’s own permissions, its suspicious activity often escapes detection by traditional security tools.

In short: AI can now become a “digital insider” that is difficult to distinguish from a real employee.

Total Ban is Not the Solution

Efforts to ban the use of AI in the workplace have proven ineffective. Nearly half of employees admit they will keep using their favorite AI tools even if the company explicitly prohibits them. A ban only pushes the practice underground, where it is harder to monitor and more dangerous.

A more realistic approach is to provide secure, job-appropriate official AI alternatives. One health care system reduced unauthorized AI use by nearly 90 percent after providing an approved internal platform.



The English, Chinese, Japanese, Arabic, and French versions are automatically generated by AI, so inaccuracies may remain; please refer to the Indonesian version as the primary language.
(system supported by DigitalSiber.id)


