What’s your most commonly used password? Do you use the same password for a few online accounts? Or one of these highly insecure passwords?

It’s difficult to find exact statistics on the number of users with bad password habits, but it’s a safe bet that the majority of us (or at least far too many) use bad passwords. Countless hacks have exposed poor practice, the most recent being the Adobe security breach, which affected at least 150 million users. Though this was partly down to Adobe’s own poor practice, it revealed just how many users still have very insecure passwords. For example, the password 123456 was used by nearly 2 million users.
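One basic defence against exactly this pattern is to refuse the most common breached passwords at sign-up. As a hedged sketch (the blocklist and length threshold here are illustrative, not any particular site's actual policy), it might look like:

```python
# Illustrative sketch: reject passwords that are too short or that
# appear on a blocklist of commonly breached passwords. A real
# deployment would check against a large breach corpus, not five entries.
COMMON_PASSWORDS = {"123456", "password", "123456789", "qwerty", "abc123"}

def is_acceptable(password: str) -> bool:
    """Return False for passwords that are too short or too common."""
    if len(password) < 8:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(is_acceptable("123456"))   # False: the password ~2 million Adobe users chose
print(is_acceptable("correct horse battery staple"))  # True
```

A check like this shifts the security decision away from the user at the one moment they are most likely to take the lazy option.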

And it’s not just home users or individuals who are careless with passwords. People neglect security in work environments too, even when their workplace handles highly sensitive data. A new report by the US government has revealed the extent of security compromises affecting an array of departments within the government.

As the report states “In the past few years, we have seen significant breaches in cybersecurity which could affect critical U.S. infrastructure.” It’s clear that on every level, from the home user right up to the most powerful organisations in the world, we aren’t practicing safe cyber-security.

Why Don’t We Practice Safe Cyber Security?

In part, lack of good security is to do with ignorance. As individual users we don’t necessarily understand where we are leaving doors open or the extent of the risk. For more powerful organisations, the difficulty can be attracting the very best personnel to protect systems which are likely to be targeted by hackers because of the prized data they hold and the status of the organisations (hacking the US government is naturally a golden feather in the cap).

But there are those amongst us who are aware of the risks, at least in a peripheral sense, and who still don’t do what we are told. And why is the US government not making more of an effort to get the right people through the door, given that cyber-security should be an absolute priority?

What we need to account for is the human element in all of this. There’s some kind of disconnect between intellectually knowing the risks of poor security and doing anything about it. What kind of psychology is going on?

The Psychology Of Bad Cyber Security

A research paper published in Communications of the ACM, which looks at how computer users assess risk, identifies the following psychological factors at play when a user approaches a security decision.

  1.  Users do not think they are at risk

People are innately biased towards optimism about themselves. They tend to believe they are less vulnerable to risk than others, and less likely than others to be harmed by consumer products from well-known brands. A user therefore starts out irrationally assuming they are at less risk than everyone else.

  2.  Users are unmotivated

Users want to conserve mental resources and look for quick decisions based on learned rules and experience. The results aren’t always perfect, but the process is highly efficient: it minimises effort and tends to give a good enough outcome most of the time. This is why we never read the terms and conditions when signing up to things online, or accept cookie policies within milliseconds – it’s never been a problem in the past, so why would it be a problem now?

  3.  Security is invisible and unrewarded

Making the right security decision doesn’t earn you a tangible, concrete reward, and that makes its value harder for users to grasp. Abstract outcomes are less persuasive than concrete ones. Since users are looking for the quick decision (see point two), they are unlikely to weigh the abstract benefit of something bad not happening against the concrete appeal of finishing their primary task sooner.

Choosing the secure option is also unrewarded. You don’t get to see how that good decision is benefiting you in the immediate aftermath, and there are currently no rewards outside the system to motivate users. There could be: an insurance company, for example, could offer lower rates to businesses that can prove tight security protocols and systems. The act of clicking the secure option could also be gamified, so that a user can see how each decision has ramped up their security.

  4.  Users will gamble for a loss

Ryan West draws on research by Tversky and Kahneman demonstrating that people are more likely to gamble to avoid a loss than to accept a guaranteed one. First, consider these two options:

  • Gain £5 at no risk
  • Gain £10 if a coin toss lands heads up

72% chose the sure option rather than taking the risk. Now consider this pair of options:

  • Lose £5 at no risk
  • Lose £10 if a coin toss lands heads up

What would you do? 64% of respondents would gamble on losing more for the chance of losing nothing, rather than accept the smaller, guaranteed loss. Applied to security, the guaranteed loss is the extra time and money spent putting security in place, researching it and hiring the right personnel, compared with the potential loss from a breach which, in the decision-maker’s head, might never happen.
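The striking thing about the two scenarios above is that, in each pair, the sure option and the coin toss have exactly the same expected value, so the preference reversal cannot be explained by the arithmetic alone. A quick check:

```python
# Worked arithmetic for the Tversky/Kahneman-style choices above.
# In expectation the sure option and the coin toss are identical,
# yet people take the sure gain but gamble to avoid the sure loss.
def expected_value(amount: float, probability: float) -> float:
    return amount * probability

sure_gain = 5.0
gamble_gain = expected_value(10.0, 0.5)    # 0.5 chance of £10

sure_loss = -5.0
gamble_loss = expected_value(-10.0, 0.5)   # 0.5 chance of losing £10

print(sure_gain == gamble_gain)   # True: same expected value
print(sure_loss == gamble_loss)   # True: same expected value
```

Since the maths is identical either way, the asymmetry is purely psychological, which is exactly West's point about security decisions.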

  5.  Security is a secondary task

For individual users, security questions tend to pop up in the middle of doing something else. You’re often halted by a prompt warning of consequences you don’t really understand and asked if you want to go ahead. In this situation users are likely to rush through and click yes because they want to get on with their task, leaving themselves at risk.

What Can Be Done?

It’s clear that there are a lot of psychological barriers standing in the way of secure use of technology. To combat this, we need to rethink how we present security decisions, both in real time to the user who has to click yes or no or read a cookie policy, and to the decision-makers within an organisation who need to invest more in improving security. A combination of education, clear and accessible information, and some gamification of the process would be a good place to start.