Militaries and Industry Seek to Solve Cybersecurity Conundrum

Since at least the turn of the century, militaries have understood the critical role cyberdefense plays in every aspect of operations. Yet most military organizations appear reluctant to train for network defense outside of specialist cyber units.

Unlike warfare on land, at sea, in the air or in space, cyberwarfare cannot be conducted by specialists alone. A mistake in the configuration or operation of any device connected to a military network could allow an adversary to gain access. The whole force has to be trained in cyberdefense—yet this wider training, for the most part, does not take place. The result is widespread vulnerability in military systems.

In the 2017 edition of his annual report, J. Michael Gilmore, the Pentagon’s then director of operational test and evaluation (DOT&E), painted a bleak picture of the U.S. military’s cyber-readiness. “Red Teams emulating a moderate-level adversary—or below—routinely demonstrate the ability to intrude [Defense Department] networks and operate undetected within [them] for extended periods of time,” he wrote.

These problems derive, in part, from a reluctance to allow a cyberelement to be included in major exercises. “Exercise and network authorities seldom allow fully representative cyberattacks and complete assessments of protection, detection and response capabilities,” the DOT&E noted. This reluctance stems from a fear that a successful cyberattack on the first day will bring a two-week exercise to a halt.

Militaries are beginning to develop ways of integrating cyber into joint exercises, but initial results have succeeded mainly in highlighting the scale of the security-skills problem.

ShowNews has learned of one experiment conducted by a military organization during a 2015 exercise involving some 200 experienced service personnel. Trainees were subjected to four simulated “phishing” attacks, in which bogus emails were sent in an attempt to persuade recipients to click an embedded link. Anyone who clicked was directed to a page telling them that, had this been a real attack, the network could have been compromised.
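The mechanics behind such a drill are simple: each recipient receives a uniquely tokenized link, so that clicks on the warning page can be attributed and counted. The following Python sketch illustrates the idea only — the class and names are hypothetical, not the system used in the exercise described here.

```python
import uuid


class PhishingDrill:
    """Minimal sketch of a phishing-simulation tracker (hypothetical)."""

    def __init__(self, wave_name):
        self.wave_name = wave_name
        self.links = {}    # token -> recipient, so clicks can be attributed
        self.clicks = {}   # recipient -> number of times they clicked

    def issue_link(self, recipient):
        # Embed a unique token per recipient in the emailed link.
        token = uuid.uuid4().hex
        self.links[token] = recipient
        return f"https://training.example/landing?t={token}"

    def record_click(self, token):
        # Called when the warning/landing page is hit.
        recipient = self.links.get(token)
        if recipient is None:
            return None
        self.clicks[recipient] = self.clicks.get(recipient, 0) + 1
        return recipient

    def click_rate(self, total_recipients):
        # Fraction of the audience that clicked at least once.
        return len(self.clicks) / total_recipients
```

Per-recipient tokens are what make the statistics in the experiment possible — including spotting the small group who clicked every link multiple times.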

The first wave of emails purported to be from a bank, urging recipients to revalidate their password. The second appeared to come from the business-oriented social network LinkedIn. The third contained an invitation to attend a talk on the Islamic State group. The fourth announced a fictitious change to the lunch arrangements during the exercise.

Around 5% of recipients clicked on the link in the banking email, and—as the organizers expected—the numbers caught out by the next two emails were lower. But a worrying 49% were fooled by the lunch email, even though it spelled the name of the exercise incorrectly. Worst of all, a very small number of people—a little under 2% of the training audience—not only clicked on every link, but did so multiple times.

“As one customer told me, there’s no patch for stupidity,” says Yochai Corem, vice president for Europe, the Middle East and Africa at Cyberbit, the cybersecurity division of Israeli defense contractor Elbit. “But even if you can improve awareness by 10% it’s important. I know of an organization where 20 people were targeted by this kind of scam, but two of them phoned their central security office [to report the suspected attack], and that was enough to stop it affecting the rest.”

The temptation to blame the untrainable 2% should be avoided, though, lest it lead to institutional and structural vulnerabilities being ignored. That, at least, is the view of Emma W., lead for people-centered security at the UK’s National Cyber Security Centre. (The NCSC is a division of GCHQ, the country’s communications-intelligence agency, whose staff cannot be identified publicly for security reasons.)

“We need to make security that works for people, because that’s the only way that security works,” Emma said during a presentation at the InfoSecurity Europe 2017 event in London earlier this month. “Shape the environment—don’t fix the people.”

Emma argues that security failures are the responsibility not of individuals but of organizations that put in place the processes and policies that allow those failures to occur. From requirements to generate, memorize and never write down multiple strong passwords, to policies under which someone who reports a security error is likely to be disciplined, organizations often erect unintended barriers to good security.

“If you really want to call your users the weakest link, I guess I’ll let you,” she says. “But first of all, you have to have invested in technology and systems to make your users’ lives more manageable day to day, and not just pile all the burden on the end-users and expect them to do things that are unrealistic.”
