Determining the ROI for any cybersecurity investment, from staff training to AI-enabled authentication managers, can best be described as an enigma shrouded in mystery. The digital threat landscape changes constantly, and it’s very difficult to know the probability of any given attack succeeding — or how big the potential losses might be. Even the known costs, such as penalties for data breaches in highly regulated industries like health care, are a small piece of the ROI calculation. In the absence of good data, decision makers must use something less than perfect to weigh the options: their judgment.
But insights from behavioral economics and psychology show that human judgment is often biased in predictably problematic ways. In the case of cybersecurity, some decision makers use the wrong mental models to help them determine how much investment is necessary and where to invest. For example, they may think about cyber defense as a fortification process — if you build strong firewalls, with well-manned turrets, you’ll be able to see the attacker from a mile away. Or they may assume that complying with a security framework like NIST or FISMA is sufficient security — just check all the boxes and you can keep pesky attackers at bay. They may also fall prey to counterfactual thinking — We didn’t have a breach this year, so we don’t need to ramp up investment — when in reality they probably either got lucky this year or are unaware that a bad actor is lurking in their system, waiting to strike.
The problem with these mental models is that they treat cybersecurity as a finite problem that can be solved, rather than as the ongoing process that it is. No matter how fortified a firm may be, hackers, much like water, will find the cracks in the wall. That’s why cybersecurity efforts have to focus on risk management, not risk mitigation. But this pessimistic outlook makes for a very tough sell. How can security executives get around the misguided thinking that leads to underinvestment, and secure the resources they need?
Over the past year, my behavioral science research and design firm, ideas42, has been interviewing experts across the cybersecurity space and conducting extensive research to identify human behavioral challenges at the levels of engineers, end users, IT administrators, and executives. We’ve uncovered insights about why people put errors into code, fail to install software updates, and poorly manage access permissions. (We delve into these challenges in Deep Thought: A Cybersecurity Story, a research-based novella.) Our findings point to steps that security executives and other cybersecurity professionals can take to work around CEOs’ human biases and motivate decision makers to invest more in cyber infrastructure.
Appeal to the emotions of financial decision makers. The way that information is conveyed to us has a huge effect on how we receive and act on it. For cybersecurity professionals, it’s intuitive to describe cyber risk in terms of the integrity and availability of data, or with quantifiable metrics like packet loss, but these concepts aren’t likely to resonate with decision makers who think about risk very differently. Instead, cybersecurity professionals should take into account people’s tendency to overweight information that portrays consequences vividly and tugs at their emotions. To leverage this affect bias, security professionals should explain cyber risk by using clear narratives that connect to risk areas that high-level decision makers are familiar with and already care deeply about. For example, your company’s risk areas may include customer data loss as well as the regulatory costs and PR fallout that can affect the company’s reputation. It’s not just about data corruption — it’s also about how the bad data will reduce operational efficiency and bring production lines to a standstill.
Replace your CEO’s mental model with new success metrics. Everyone uses mental models to distill complexity into something manageable. Having the wrong mental model about what a cybersecurity program is supposed to do can be the difference between a thwarted attack and a significant breach. Some CEOs may think that security investments are for building an infrastructure, that creating a fortified castle is all that’s needed to keep a company safe. With this mental picture, the goals of a financial decision maker will always be oriented toward risk mitigation instead of risk management.
To get around this, CISOs should work with boards and financial decision makers to reframe metrics for success in terms of the number of vulnerabilities that are found and fixed. No cybersecurity system will ever be impenetrable, so working to find the cracks will shift leaders’ focus from building the right system to building the right process. Counterintuitively, a firm’s security team uncovering more vulnerabilities should be considered a positive sign. All systems have bugs, and all humans can be hacked, so treating vulnerabilities as shortcomings will create an unintended incentive for an internal security team to hide them. Recognize that the stronger the security processes and team capabilities are, the more vulnerabilities they’ll discover (and be able to fix).
Survey your peers to help curb overconfidence. Overconfidence is a pervasive bias, and it can be a big problem if it clouds leaders’ judgment about cybersecurity investment. Our research found that many C-level executives believe that their own investments in cybersecurity are sufficient but that few of their peers are investing enough (a belief that, given how widespread it is, can’t possibly be true). One way that CISOs can overcome a CEO’s overconfidence is to compare the company’s performance with a baseline from similar firms — in other words, confront the problem head-on. You can accomplish this by regularly polling CISOs and executives about how well organizations in your industry are managing cybersecurity infrastructure, prompting them to be as specific as possible about what they are doing well and what they’re not, and asking those same CISOs to help determine how well your own firm is doing. This way, CISOs can provide clearer information to CEOs about how they are actually performing relative to their industry peers.
“You are the weakest link.” In her book Regarding the Pain of Others, Susan Sontag wrote, “To photograph is to frame, and to frame is to exclude.” Human attention functions quite similarly. People concentrate on certain aspects of information in their environment while ignoring others; what a CEO chooses to invest in can be thought of in a similar light. For instance, in the wake of a newsworthy hack, CEOs may push their teams to ramp up investment in cyber infrastructure to protect against external threats. But in doing so they may be inattentive to unwitting internal threats that may be just as costly — employees clicking on bad links, or falling for phishing attacks.
How can a CISO work around a decision maker’s inattention? No one likes to be embarrassed, but negative feedback can sometimes be an effective remedy for inattention. Security teams should regularly try to break their own systems through penetration testing, and the CEO should be the biggest target. After all, that’s how outside hackers would see it. Making the CEO the victim of an internally initiated (and safe) attack can draw leaders’ attention to risks that already exist and motivate them to increase their investment in cyber infrastructure.
If the focus of cybersecurity programs continues to be on designing better technologies to combat the growing menace of cyberattacks, we’ll continue to neglect the most important aspect of security — the person in the middle. By turning the lens of behavioral science onto cybersecurity challenges, CISOs can identify new ways to approach old problems, and maybe improve their budgets at the same time.