Why Do Employees Keep Ignoring Workplace Cybersecurity Rules?

Companies spend a lot of time making sure employees know the rules regarding cybersecurity. They cajole, they beg, they threaten. They make them take classes, sign forms, watch videos.

And yet, somehow, it does little good.

A study by Gartner last year found that 69% of employees had bypassed their organization’s security policies in the past 12 months, and 74% said they would be willing to do so if it helped them or their team accomplish a business objective. All this even though most of them probably know that human error is often a factor in cybersecurity breaches.

Such indifference inevitably raises the question: Why? Why do people ignore security guidelines, even in the face of stiff penalties? Why do they flout the rules, even though they know it isn’t good for either their employers or themselves?

The answer may be what criminologists have long called “neutralization techniques”—rationalizations that people instinctively use to “neutralize” the wrongness or harm of an action. Cybersecurity researchers have shown that such techniques also play a big role in employees’ willingness to ignore their employer’s cybersecurity guidelines.

The concept of neutralization was developed by American criminologists Gresham Sykes and David Matza in the 1950s to explain how juvenile offenders "neutralize" the guilt associated with misbehavior. They identified several neutralization techniques and laid the groundwork for later criminologists to add more.

Typical rationalizations employees use to flout an employer’s security rules include the following. Many will no doubt sound familiar, but they all have the same thing in common: They allow people to shirk the guilt they would normally feel for violating security guidelines. In this way, employees can maintain their rule-abiding self-image while drifting in and out of compliance.

Denial of injury is when employees convince themselves that no harm will come from ignoring a security policy, so breaking the rule is acceptable. And because it is acceptable, no punishment is deserved.

Appeal to higher loyalties is when employees place the demands of a work project or manager above compliance with a security guideline. They know that ignoring the policy is wrong, but they make loyalty to someone or something an imperative that overrides that.

Denial of responsibility is when employees refuse to take personal responsibility for their actions, rationalizing that the situation is beyond their control. They might claim they weren’t aware of a specific security policy or weren’t given the proper training to implement it.

Metaphor of the ledger is a technique in which employees mentally tally all of the positive things they do, such as working overtime or meeting quotas, and compare those to their occasional negative behaviors. If the positive actions outnumber the negative, they tell themselves they should be able to break a security rule occasionally without feeling guilty.

Defense of necessity is when employees convince themselves they were forced to behave a certain way in a given situation, so it isn’t their fault. For example, they justify downloading unauthorized software from the internet because they need it to meet a tight deadline.

Condemnation of the condemners involves criticizing those who implement and enforce security policies and using that as justification for ignoring the rules. For example, employees might believe that the security team is unreasonable or out of touch with the needs of the business, so they view their policies as invalid and acceptable to ignore.

One would think the threat of sanctions would cause employees to think twice before violating a company’s information-security rules. Unfortunately, sanctions for wrongdoing are precisely what neutralization techniques are so effective at dismissing. If someone convinces themselves that violating a policy isn’t wrong in their circumstance, why would they fear being apprehended or punished?

But if sanctions don’t work, what does?

First, security teams can use training courses to address neutralization techniques head-on by explaining how they work and why they are invalid. Because so many of these techniques are second nature, bringing them to the surface forces people to examine rationalizations they would otherwise apply without thinking.

In one study, researchers experimentally delivered security training to 87 employees of a large multinational company. Of those employees, 21 received the standard security training and 66 received training that also addressed specific neutralization techniques.

For example, the trainer described how people commonly use a “defense of necessity” to justify choosing weak passwords, believing that strong passwords are too onerous to use. Then the trainer discussed why this notion isn’t necessarily true and demonstrated practical ways to choose passwords that are both strong and usable. In another example, the trainer explained how people use the “denial of injury” technique to rationalize no harm is done in using a weak password. The trainer then showed why this view is false by demonstrating how easily hackers can guess weak passwords and the damage that can be done with this access.
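The trainer's point about "denial of injury" can be made concrete with a small sketch. The following illustrative Python example (the wordlist and the hashed password are invented for this demonstration, not drawn from the study) shows how a common password falls to a tiny dictionary attack in a couple of guesses, while even a modest random password presents an astronomically larger search space:

```python
import hashlib
import string

def sha256(pw: str) -> str:
    """Hash a password the way a (simplified) credential store might."""
    return hashlib.sha256(pw.encode()).hexdigest()

# A toy wordlist of common passwords (hypothetical; real attacker lists
# contain millions of entries harvested from past breaches).
COMMON_WORDS = ["password", "letmein", "qwerty", "welcome1", "summer2024"]

def dictionary_attack(target_hash: str, wordlist):
    """Try each candidate; return (matched password or None, guesses used)."""
    for n, guess in enumerate(wordlist, start=1):
        if sha256(guess) == target_hash:
            return guess, n
    return None, len(wordlist)

# A weak, common password is recovered almost immediately...
weak_hash = sha256("letmein")
found, tries = dictionary_attack(weak_hash, COMMON_WORDS)
print(found, tries)  # letmein 2

# ...while a random 12-character alphanumeric password lives in a
# search space far too large to enumerate by guessing.
alphabet = string.ascii_letters + string.digits  # 62 symbols
combos_12_chars = len(alphabet) ** 12
print(f"{combos_12_chars:.2e} combinations for a random 12-char password")
```

The contrast is the trainer's argument in miniature: the "injury" of a weak password isn't hypothetical, because guessing it is cheap, while a strong password raises the attacker's cost by many orders of magnitude.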

Employees in this latter group reported substantially higher intention to comply with the security policy in the future and lower agreement with neutralization techniques compared with those in the control group. These differences held three weeks later in a follow-up survey.

Second, if holding a training course isn’t practical or feasible, organizations can call out neutralization techniques in messages sent to employees. In one study, 200 working professionals were presented with hypothetical scenarios in which an individual used either a “defense of necessity” or “denial of injury” rationalization to violate a security policy. They were then asked the likelihood that they would behave the same way in a similar situation. Half of the scenarios included a message that directly undermined the specific neutralization techniques used, such as: “Even though people believe that sharing passwords can be justified under certain circumstances without any real consequences, adherence to this policy is important; sharing of passwords should not be justified for any reason.”

The results showed that those who received messages that directly undermined specific neutralization techniques reported far lower likelihood that they would violate the security policy if in the same situation in the future. This suggests that simply being aware of the fallacies underlying neutralization techniques can help to reduce their use.

The key takeaway from these studies is that organizations need to understand that neutralization comes naturally to people, and no amount of threatening and insisting will change that. But management can help employees to recognize and reject rationalizations, and see cybersecurity policies for the essential role they play.

Anthony Vance is the Lenz Professor and Commonwealth Cyber Initiative Fellow in the department of Business Information Technology at Virginia Tech’s Pamplin College of Business. Zeynep Sahin is a Ph.D. student at Virginia Tech specializing in cybersecurity workforce issues. They can be reached at [email protected].
