The best laid plans are often fraught with mistakes – some big, some more nuanced. Evan Schuman looks at where CISOs can fall short.
Enterprise security today, at least at the $4 billion annual revenue level and up, is in a precarious place. Despite GDPR and best security practices insisting on complete global datamaps, hardly any such firms have them. Cloud use is just about universal, and the volume of assets moving to the cloud is increasing sharply every year. And yet, cloud security – or, more precisely, unwarranted reliance on presumed cloud security – is abysmal.
Shadow IT is getting far
worse, even though it’s been around for well more than a decade. IoT usage is
soaring, often with negligible security protections, and CISO-to-board
communications are hardly improving. But as bad as board communications
are, CISO to line-of-business (LOB) communications
are widely regarded as even worse.
Access to sensitive information is being
shared more widely with all manner of partners and even some prominent
customers, with woefully inadequate means to guarantee security compliance.
CISOs are regarded as overly resistant to change, while some CISOs are more
worried that they are being pushed to adopt unproven new technology too quickly.
As the old joke says, “Other than that, Mrs.
Lincoln, how was the play?”
The area where most security consultants
express the greatest concern is accurate and comprehensive datamaps, aka
asset management. In short, the criticism is that hardly any large enterprises
know where all of their sensitive data is, what cloud environments they
control, what mobile devices exist in the company and what data those devices
hold and they often have barely a clue about what applications the company is
using. If security and IT don’t know where all their data is, the argument
continues, how can they possibly protect it or even know if it’s been breached?
Until some third party stumbles on the stolen data in the dark web or law
enforcement finds it on the server of an arrested cyberthief, these security
incidents are often secrets from the very people paid to protect that data.
“Some 100 percent of (CISOs) have apps in the
cloud that they don’t even know exist,” says Gartner Research Director Sam
Olyaei. That is a combination of shadow IT (employees purchasing their own
cloud environments for working on their own or their business unit’s projects –
without telling IT or security) and the nature of many major cloud
environments, which use their own apps in addition to the ones the enterprise itself installs.
Joe Nocera, a principal
in cybersecurity and privacy with PwC, the consulting firm formerly known
as PricewaterhouseCoopers, agrees with Olyaei. He sees large enterprise CISOs
still struggling with asset inventory. Part of that problem is that datamap
perfection – knowing exactly where all data, apps and infrastructure are and
what they all contain – is arguably impossible and that discourages many CISOs
from seriously trying, especially when they are overwhelmed with other tasks
that they can in fact master.
“They tell me, ‘I’ll
never have 100 percent complete and accurate’ and I tell them ‘So strive for
that 99 percent,’” Nocera says. “They are being hit by a misconfiguration or a
patch to a system they didn’t even know was running on that software. They
either get overwhelmed by the size and the challenge and they don’t even start
or they try and do too much and they don’t accomplish anything.”
Another analyst who points to insufficient datamaps as
being a key CISO mistake is Andrew Morrison, who leads strategy, defense and
response for Deloitte Cyber in the U.S. Part of the negative consequences
materializes with vulnerability management of applications. CISOs “are sitting
on a timebomb. They can only fix or patch so many things,” Morrison says. “It’s
a whack-a-mole game of patch configuration.”
PwC’s Nocera offers one solution and that is to “leverage
big data techniques to better (handle) inventory and discover the environment.
Any type of way to continually monitor new assets.” Some enterprises have toyed
with using a modification of security’s continuous authentication to constantly
watch the network for any new executables or even IoT devices – whether installed by friend or foe.
Nocera says that could help with asset inventory as well, particularly when
trying to identify assets or programs created via shadow IT.
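The continuous-monitoring idea Nocera describes reduces, at its core, to constantly diffing what the network actually shows against what the sanctioned inventory claims. A minimal sketch of that diff, with invented hostnames and an invented `find_shadow_assets` helper, purely for illustration:

```python
# Hypothetical data: hosts from a sanctioned CMDB export versus hosts
# actually observed by passive network monitoring over the same period.
SANCTIONED = {"erp-prod-01", "mail-gw-02", "hr-db-01"}

def find_shadow_assets(sanctioned, observed_hosts):
    """Return observed hosts that appear in no sanctioned inventory --
    candidates for shadow IT, rogue IoT devices, or forgotten systems."""
    return sorted(set(observed_hosts) - set(sanctioned))

observed = ["erp-prod-01", "dev-gpu-box", "hr-db-01", "iot-cam-17"]
print(find_shadow_assets(SANCTIONED, observed))  # ['dev-gpu-box', 'iot-cam-17']
```

In practice the “observed” side would be fed continuously from DHCP leases, switch telemetry, or endpoint agents; the point is the diff, not the particular data source.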
CISOs “can’t depend on the rest of IT telling you,” Nocera
says, adding that they also suffer from a sharp lack of resources. Also, he
says, every unit is looking only for the items that most relate to their own
operations. “The infrastructure team is trying to find assets, the data team is
trying to discover the data, the privacy group is looking for GDPR or CCPA data.”
The challenges of asset
management lead directly to the problems with the cloud, whether it’s an
official cloud environment or one purchased by a renegade unit operating within
the wild west rules of shadow IT.
The biggest cloud
problem discussed by consultants is that many security departments assume – or,
perhaps more bluntly, hope – that the major cloud environments (Amazon, Google,
Microsoft, etc.) offer far more security for client tenants than they actually do.
Often, it’s not that
cloud environments are necessarily less secure than the security at the
enterprise tenant’s operations – although that is certainly sometimes the case
– but it’s that cloud environments are working with a massive number of tenants
from Fortune 1000 companies and the cloud vendor does the best it can offering
a decent vanilla environment for all. But it can’t tailor its environment for
the compliance and security needs of different tenant companies and it
therefore assumes that every client tenant will perform their own extensive
customization based on that enterprise’s security, compliance, vertical and other needs.
“Cloud workflows are not inherently secure,”
says Gartner’s Olyaei, who drew the comparison of a cloud tenant with a
physical tenant in an apartment building. He argues that an apartment tenant
has to make sure that the apartment doors and windows are locked and that no
stranger is granted access, but that apartment tenant can’t control the CCTV in
the lobby or make sure that the doorman and security guards stay awake and do
their jobs. “The CISOs believe that it’s the (cloud service provider’s)
responsibility to protect their data and apps” even though that is not the
cloud company’s job, he says.
Olyaei points to
Capital One’s massive cloud breach as an example of an enterprise trusting the
cloud provider too much on security matters. “The fact that the
misconfiguration happened is the customer’s responsibility,” he says.
Deloitte’s Morrison agrees. “In the adoption of cloud, the notion of outsourcing security
responsibilities to the cloud vendor” remains, he says. Some CISOs “are
learning the hard way that you’re not obviating the responsibility for security”
when a cloud vendor is retained.
Morrison said some of
his clients have suffered because of how a cloud vendor handled some tracking.
“Things like logging and how far back they retain logs. We’ve seen a major
(enterprise) have an attack and want to do the standard forensics and the data
just wasn’t there because the cloud provider – courtesy of the contract – only
retained that log data for 30 days,” Morrison says.
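Retention gaps like the one Morrison describes can be audited before an incident rather than discovered during one. As a sketch, assuming AWS CloudWatch Logs: the dict shape below mirrors what boto3’s `logs.describe_log_groups()` returns, and the 365-day threshold is an arbitrary example policy, not a recommendation.

```python
def short_retention_groups(log_groups, minimum_days=365):
    """Flag log groups whose retention is too short for forensics.

    A missing 'retentionInDays' key means CloudWatch keeps the logs
    forever, so only groups with an explicit, too-small value are flagged.
    """
    flagged = []
    for group in log_groups:
        days = group.get("retentionInDays")
        if days is not None and days < minimum_days:
            flagged.append((group["logGroupName"], days))
    return flagged

# Sample data in the describe_log_groups() response shape:
groups = [
    {"logGroupName": "/app/payments", "retentionInDays": 30},
    {"logGroupName": "/app/auth"},                       # kept forever
    {"logGroupName": "/app/web", "retentionInDays": 400},
]
print(short_retention_groups(groups))  # [('/app/payments', 30)]
```

The same check applies to any provider: the contract sets the retention, so the audit has to read the configuration, not assume it.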
Another frequently mentioned complaint about the mistakes Fortune 1000 CISOs make today involves
the fear of change. All humans fear change – and yes, despite what some CFOs
believe, all CISOs are technically human – but analysts argue that this
resistance to security change is proving quite counterproductive.
Shawn Fohs, the
managing director of the U.S. forensic, privacy and cyber response for
consulting firm Ernst & Young (EY), points to the general CISO resistance
to, for example, AI’s machine learning (ML) for security, despite the fact that
many of these same enterprises are aggressively using ML in other departments
for research, product development or marketing analysis.
“A lot of it is a
confidence issue,” he says. CISOs “have to become more aware of potential risks
and how these risks are evolving. Cybersecurity professionals are hesitant to
change and it’s a mistake that they are this hesitant to change.”
Much of this hesitancy
to change is evidenced in purchasing patterns, Fohs says. “CISOs tend to get
very focused on that we need this tool to solve this problem. They get too
fixated on a specific tool and they don’t embrace all of the different tools
that are available to them. They get vendor fatigue and they stick with what they
know” rather than “adapting to a new mindset. They go to all of these
conferences and they meet with all these vendors, but instead of actually
engaging or embracing (the new tools), they just go back to tried and true.
They know what has historically worked for them.” He cited continuous
authentication and behavioral analytics as other examples of promising
technologies that are meeting with a lot of CISO resistance.
Forrester Research Vice President and Principal Analyst
Jeff Pollard agrees with Fohs and says that many “CISOs have created that
situation for themselves by not being involved in innovation processes. It’s a
lack of flexibility and treating everything the same. Security hasn’t been very
willing to customize.” He wants to see more CISOs not only embracing DevSecOps,
but taking the DevSecOps approach and applying it to improving relations with
lines of business (LOB) by embedding security people within those groups.
Not only would that
kind of cooperation bring security into product development far earlier, but
team members would also bring back much more sophisticated understandings about
the goals and priorities of those LOBs so that security wouldn’t merely
improve, but could also become far more responsive to those business units and
might even be able to accurately anticipate their needs.
PwC’s Nocera also sees this complacency and lack of growth to be a key CISO mistake. CISOs “are not adapting their security model to be as agile as it could be. They are not being on the front-end of innovation. Many of these chiefs have grown up being excellent technologists, (wonderful) at responding to incidents. Their comfort zone is fighting fires,” Nocera says.
Sometimes, good guys become bad guys with a little math error
In security circles, the typical discussion about
mistakes is when CISOs/CSOs/Security Analysts are accused of making them. But
sometimes it’s the cybercriminals or cyberterrorists who make mistakes. And
even more interestingly, it’s sometimes good guys who make mistakes and accidentally
morph into bad guys.
The recent history of security incidents gives us
two superb examples of hacker mistakes turning those bold hackers into accidental bad guys.
The first is
Robert Tappan Morris, who gave the industry what is arguably the very first
Internet Worm (now known as the Morris Worm) back in 1988. That worm, which
brought much of the Internet to a crawl or a dead-stop, was an experiment that
graduate student Morris created to make a point about Internet security
failings. Point made.
But his intent had never been to crash the
Internet, had never been to create something that more closely resembled an
early DDoS attack than a run-of-the-mill worm. That was the result of a math
error that Morris made. While leveraging holes in fingerd, Sendmail and
rsh/rexec, Morris created the worm to impact a limited number of computers
globally: Just enough to make his point.
Ironically, the error grew out of Morris’ explicit
attempt to make sure that the worm did not grow wildly out of control. To slow
it down, he coded the worm to, in effect, ask every machine it visited whether
the worm was already installed. But Morris knew that admins could simply
program the system to always say “Yes, you’re already here. You can go away
now.” To combat this, Morris programmed the worm to install itself even if the
system said it was already there, roughly one out of seven times.
It turns out that the one-in-seven rate was what he had intended all along;
his math error was in choosing that rate, which should have been
a far smaller number. One in seven was more than enough
to crash the Internet.
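The effect of that one-in-seven choice is easy to see in a toy simulation. This is not the worm’s actual code, just a sketch of its reinfection logic under invented parameters: every copy probes one random machine per round, always installing on clean machines and, with probability 1/7, installing a duplicate on already-infected ones.

```python
import random

def simulate(machines=50, rounds=15, reinfect_p=1/7, seed=42):
    """Count worm copies per machine under the Morris reinfection rule."""
    rng = random.Random(seed)
    copies = [0] * machines
    copies[0] = 1                        # patient zero
    for _ in range(rounds):
        new = [0] * machines
        for count in list(copies):
            for _ in range(count):       # each copy probes one target
                target = rng.randrange(machines)
                if copies[target] + new[target] == 0:
                    new[target] = 1      # fresh infection
                elif rng.random() < reinfect_p:
                    new[target] += 1     # duplicate copy: the fatal choice
        copies = [c + n for c, n in zip(copies, new)]
    return copies

load = simulate()
print(sum(load), "copies across 50 machines")
```

With `reinfect_p=0` no machine ever hosts more than one copy; with 1/7, duplicate copies keep piling onto already-infected machines round after round, which is exactly the load that dragged hosts to a crawl in 1988.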
A more recent example was the Heartbleed bug, from back in 2014. In a wonderful
interview with Heartbleed creator Robin Seggelmann in Australia’s Sydney Morning
Herald, Seggelmann said “I was working on improving OpenSSL and submitted
numerous bug fixes and added new features. In one of the new features,
unfortunately, I missed validating a variable containing a length.” After
Seggelmann submitted the code, a reviewer “apparently also didn’t notice the
missing validation, so the error made its way from the development branch into
the released version.” Seggelmann said the error was “quite trivial,” even though
its effect wasn’t. “It was a simple programming error in a new feature, which
unfortunately occurred in a security-relevant area.”
As for the impact, the Herald put it succinctly:
“The bug introduced a flaw into the popular OpenSSL software, which is used by
many popular social networking websites, search engines, banks, and online
shopping sites to keep personal and financial data safe. It allowed those who
knew of its existence to intercept usernames, passwords, credit card details
and various other sensitive information from a website’s server in plain text.
It also allowed for a server’s private encryption keys to be stolen.”
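Seggelmann’s “quite trivial” error, trusting a length field that the other side supplies, can be illustrated with a toy heartbeat handler. This is a simplified model, not OpenSSL’s actual C code (where the overread happened via `memcpy` on the heap); the buffer layout and function names here are invented for the example.

```python
# Toy process memory: a 4-byte heartbeat payload sits right next to
# unrelated secrets, much as it would on a real process's heap.
MEMORY = b"PING" + b"|secret_key=hunter2|session=abc123"
PAYLOAD_LEN = 4

def heartbeat_buggy(claimed_len):
    """Echoes back claimed_len bytes, trusting the sender's length claim."""
    return MEMORY[:claimed_len]          # reads past the real payload

def heartbeat_fixed(claimed_len):
    """Post-Heartbleed behavior: discard requests whose claimed length
    exceeds the payload actually received."""
    if claimed_len > PAYLOAD_LEN:
        return b""
    return MEMORY[:claimed_len]

print(heartbeat_buggy(64))   # leaks the secrets sitting next to the payload
print(heartbeat_fixed(64))   # b''
```

The missing validation is one `if` statement, which is exactly why the error was so easy for both author and reviewer to miss.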
If the path to hell is paved with good intentions, so, too,
it seems, is the path to cybersecurity disasters. At least sometimes.