Secure.com has published a guide on when artificial intelligence helps and hurts corporate security teams. It offers a framework for chief information security officers to assess where AI should be used in security operations.
The Dubai-based cyber security company focuses on the balance between automation and human judgement in security operations centres, where teams face heavy alert volumes and growing pressure to respond quickly.
Research cited in the guide shows that 76% of CISOs expect a material cyberattack within the next 12 months, while most organisations already use AI in some form. Secure.com argues that the key question is no longer whether to adopt AI, but where it adds value and where it creates risk.
The publication covers alert triage, phishing detection, shadow AI, governance and the metrics security teams can use to judge whether an AI deployment is working. It also warns against over-reliance on systems whose decisions cannot be explained or reviewed by human operators.
Uzair Gadit, Founder & Chief Executive Officer of Secure.com, said the issue is one of judgement rather than simple adoption.
“Security vendors will tell you AI solves everything. Your analysts will tell you it sometimes makes things worse. Both are right. The real job is knowing which situation you are in because the problem CISOs face isn’t whether to deploy AI, it’s misplaced trust,” he said.
Where it helps
One section argues that AI is most useful for repetitive, high-volume work rather than final security decisions. In a typical security operations centre, teams can face more than 1,000 alerts a day, many of them low-risk or false positives.
The guide says AI-driven triage can reduce the manual workload by 70%. It also cites studies showing that mean time to detect can fall by 30% to 40% and mean time to respond by 45% to 55% when AI is used to sort, enrich and correlate incoming alerts.
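The guide stops at the outcome figures, but the plumbing it describes, sorting, enriching and correlating alerts before a person sees them, can be pictured with a short sketch. The Python below is an illustrative assumption rather than anything from the report: the asset names, scoring weights and the risk_score function (which stands in for wherever a model's prediction would sit) are invented for the example.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    source: str       # e.g. EDR, email gateway, firewall
    asset: str        # hostname or account the alert relates to
    severity: int     # 1 (low) to 5 (critical), as reported by the source

def enrich(alert: Alert, critical_assets: set) -> dict:
    """Attach business context so prioritisation reflects impact, not just raw severity."""
    return {"alert": alert, "asset_is_critical": alert.asset in critical_assets}

def correlate(enriched: list) -> dict:
    """Group alerts that share an asset so one incident surfaces as one work item."""
    groups = defaultdict(list)
    for item in enriched:
        groups[item["alert"].asset].append(item)
    return groups

def risk_score(group: list) -> int:
    """Stand-in for a model's prediction: severity, repetition and asset criticality raise priority."""
    base = max(item["alert"].severity for item in group)
    repetition = min(len(group) - 1, 3)
    criticality = 2 if any(item["asset_is_critical"] for item in group) else 0
    return base + repetition + criticality

if __name__ == "__main__":
    critical_assets = {"payroll-db"}
    alerts = [
        Alert("a1", "EDR", "payroll-db", 3),
        Alert("a2", "firewall", "payroll-db", 2),
        Alert("a3", "email", "dev-laptop-17", 4),
    ]
    groups = correlate([enrich(a, critical_assets) for a in alerts])
    # Highest-scoring groups reach a human analyst first; the rest queue behind them.
    for asset, group in sorted(groups.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
        print(asset, risk_score(group), [item["alert"].alert_id for item in group])
```

In this framing the automation only ranks and batches work; the decision about what to do with the top of the queue stays with an analyst, which matches the division of labour the guide describes.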
Pattern recognition is another area the guide highlights. AI is well suited to scanning large datasets for phishing attempts and anomalous user behaviour, tasks that become difficult for human analysts at scale.
Where it hurts
The guide also warns that AI can create problems when organisations trust it too readily. The main risk, it says, is not visible system failure but quiet failure: a model misses something, and the omission goes unnoticed until after a breach.
AI systems remain limited by their training data and are less reliable when faced with new attack methods outside familiar patterns. The document cites findings that the time between vulnerability disclosure and active exploitation has shortened sharply, increasing the risk for teams that depend too heavily on automated tools.
Internal use of unapproved AI tools is presented as a separate concern. The guide describes shadow AI as a governance problem rather than purely a technology issue, with staff potentially exposing sensitive company data through chatbots, browser tools or other assistants without formal oversight.
It cites research showing that nearly 22% of employees have unrestricted access to publicly available AI tools at work. That can create exposure risks that standard detection systems may not reveal on their own.
Governance questions
A central part of the guide is a set of four questions organisations should ask before rolling out any AI security product. They cover whether the tool reduces real risk, whether teams can explain its decisions, whether mistakes can be reversed, and who owns and maintains the system once deployed.
The report argues that explainability, auditability and reversibility should be treated as minimum requirements. In practice, that means security teams should be able to trace why a model made a recommendation, log what actions were taken, and roll back automated responses if they prove wrong.
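The report does not prescribe an implementation, but the three requirements map naturally onto a simple pattern: every automated action carries the model's stated rationale, lands in an audit log, and keeps a matching undo step. The sketch below assumes hypothetical block_ip and unblock_ip placeholders and an in-memory log purely for illustration.

```python
import json
import time

AUDIT_LOG = []  # in practice this would be an append-only store, not a Python list

def block_ip(ip: str) -> None:
    print(f"blocking {ip}")        # placeholder for a real firewall or EDR API call

def unblock_ip(ip: str) -> None:
    print(f"unblocking {ip}")      # placeholder for the corresponding rollback call

def apply_automated_response(ip: str, model_rationale: str) -> dict:
    """Execute a response, record why it was taken, and keep a way to undo it."""
    entry = {
        "timestamp": time.time(),
        "action": "block_ip",
        "target": ip,
        "rationale": model_rationale,   # explainability: why the model recommended this
        "reversed": False,
    }
    block_ip(ip)
    AUDIT_LOG.append(entry)             # auditability: every automated action is logged
    return entry

def roll_back(entry: dict) -> None:
    """Reversibility: undo the action if an analyst judges it was wrong."""
    unblock_ip(entry["target"])
    entry["reversed"] = True

if __name__ == "__main__":
    record = apply_automated_response("203.0.113.7", "traffic matched credential-stuffing pattern")
    roll_back(record)                   # analyst review found it was a false positive
    print(json.dumps(AUDIT_LOG, indent=2))
```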
Secure.com also recommends starting with a single use case rather than broad deployment across the security estate. It points to automated alert triage as the most practical starting point, with results measured through standard metrics such as mean time to detect, mean time to respond and false positive rates.
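By way of illustration only, the baseline metrics the guide names can be computed from per-incident timestamps along these lines; the record layout and sample figures are assumptions made for the example, not data from the report.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: when activity started, when it was detected,
# when it was contained, and whether the alert turned out to be a false positive.
incidents = [
    {"occurred": datetime(2025, 1, 3, 9, 0), "detected": datetime(2025, 1, 3, 10, 30),
     "responded": datetime(2025, 1, 3, 13, 0), "false_positive": False},
    {"occurred": datetime(2025, 1, 7, 22, 0), "detected": datetime(2025, 1, 8, 1, 0),
     "responded": datetime(2025, 1, 8, 6, 0), "false_positive": False},
    {"occurred": datetime(2025, 1, 9, 14, 0), "detected": datetime(2025, 1, 9, 14, 20),
     "responded": datetime(2025, 1, 9, 14, 25), "false_positive": True},
]

def mean_delta(deltas: list) -> timedelta:
    """Average a list of timedeltas."""
    return sum(deltas, timedelta()) / len(deltas)

true_incidents = [i for i in incidents if not i["false_positive"]]

# Mean time to detect: average gap between first malicious activity and detection.
mttd = mean_delta([i["detected"] - i["occurred"] for i in true_incidents])

# Mean time to respond: average gap between detection and containment.
mttr = mean_delta([i["responded"] - i["detected"] for i in true_incidents])

# False positive rate: share of triaged alerts that were not real incidents.
fp_rate = sum(i["false_positive"] for i in incidents) / len(incidents)

print(f"MTTD: {mttd}, MTTR: {mttr}, false positive rate: {fp_rate:.0%}")
```

Tracking the same three numbers before and after an AI deployment is what lets a team judge whether the tool is actually reducing risk rather than simply adding activity.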
Human oversight
Secure.com’s position is that AI should handle volume while people retain responsibility for judgement. The guide says automated systems are best suited to triage, enrichment, correlation and initial prioritisation, while human analysts should continue to control complex investigations, incident response decisions and cases involving critical business systems.
That distinction reflects a wider debate in cyber security over whether AI will ease the burden on stretched teams or create a new layer of operational risk. Secure.com’s analysis comes down firmly in favour of selective deployment backed by strong internal controls.
The guide concludes that organisations should set AI governance rules before employees or teams create informal workarounds. Policies, it says, should spell out which tools are approved, what data they can access and who is accountable when something goes wrong.
