Is Fear a Good Idea? Cyber Security Behavioural Design for Staff

Behavioural design for cyber security is an important part of every company's digital strategy. Usually, companies develop two types of social marketing policies: internal (mostly addressing staff risks) and external (primarily targeting customers). Internally, developing a cyber security culture is a non-trivial task. Ideally, a Chief Cyber Security Officer wants to encourage every employee to become the business's "eyes and ears" in cyberspace. The problem is that this is difficult to achieve. Hence, more often than not, companies invest in technological solutions and either ignore staff cyber education completely or apply inadequate training mechanisms, which often have negative effects on employees' enthusiasm as well as morale.

So, what are the most common mistakes in behavioural design for cyber security in staff training?

Technical compliance emphasis: Staff compliance with company cybersecurity policies is probably one of the most common ways in which human-related cybersecurity issues are addressed within businesses. Many companies have a set of rules regarding their computer and data systems. Prior to COVID, some companies were very relaxed about taking data outside company premises, allowing employees to work from home, while others were extremely cautious, requiring staff to use dedicated internal drives or internal clouds for all operations with data. In the current environment, with many countries in a state of quarantine, working from home has become commonplace. Yet there is still significant heterogeneity in company cyber security policies: whereas some businesses allow free unencrypted communication (sometimes even the use of private email), others heavily restrict the way communication happens and even insist on encrypting all externally-facing messages. Usually, this depends on the company's sector and size. Large corporates tend to be more restrictive and small start-ups more laissez-faire.

But restrictions do not necessarily mean more security, as (with rare exceptions) employees tend to bend the rules because compliance messages are delivered in a very technical and rather disengaging manner. As a result, if you ask a random sample of people from a particular business (with the exception of cyber security technology providers, of course) for their opinion about cyber security in their company, the overwhelming majority will say that cybersecurity is a "dry", "boring", and "technical" subject. Much like airplane safety instructions, people tend to skip compliance messages simply because they have a psychological barrier: if 99 out of 100 times what you read, watched, or heard about cyber security was boring, on the 100th time you will not believe that it can be exciting.

Negative reinforcement: Companies tend to use negative rather than positive reinforcement in cyber security training. Over the course of several years, I have interviewed hundreds of companies about cyber security training, and almost all of them use the stick rather than the carrot in cyber security education. Staff members are punished by losing Internet functionality on their corporate computers or fined for failing to spot "bogus" threats or attacks designed by the IT department. In some cases, you can even get fired for "re-offending": for example, I recently talked to a company where employees are fired after "3 strikes" (clicking on the "wrong" link 3 times). Such policies lead to negative results: people become overly suspicious, panic over simple emails, and fail to constructively address potential problems because they do not know how they will be viewed by management. Such measures also tend to erode trust. Many corporate employees I have talked to about cybersecurity issues told me that they were unlikely to inform their IT department of a potential problem (e.g., if they clicked on a link in a malicious email) simply because they thought this might have a negative impact on their promotion. The result of such policies is often highly undesirable. People (often very good people) simply leave: in the course of my research I once met a person who became so paranoid about clicking on anything that came by email (links, attachments, etc.) that she quit her job at a large corporation. Now she works for a start-up, where the culture around cyber security is a lot more positive. By introducing such measures, organisations often underestimate the potential consequences of negative reinforcement: their goal is to expose their staff to potential threats and help them gain experience in recognising risks.
Yet, it seems that instead of gaining experience, employees tend to become anxious about cyber issues when negative reinforcement is used. Positive reinforcement (bonuses, praise, etc.) would likely work a lot better than punishment where staff training is concerned, as it would allow organisations to encourage staff to participate in cybersecurity tasks while avoiding unnecessary blame and negative feelings.

Contradictory protocols: Another common trend is the tendency to develop contradictory protocols, which invite much rule misinterpretation by employees. For example, I know several large corporations with the strictest internal cyber security protocols that nevertheless maintain completely counter-intuitive policies. On the one hand, it is impossible to make a single click without the cyber security department and IT knowing. On the other hand, many teams actively use unsafe means of communication, such as WhatsApp, which has been involved in a number of cyber security incidents. Nevertheless, communication and a fair share of confidential information flow through WhatsApp daily. This creates issues for the behavioural design of cyber security, as employees get the wrong idea about what is secure and what is not.

Information overflow: Overloading people with cyber security information is another major problem for many organisations. Many businesses seem to believe that the more information they provide to their staff (and customers!), the less human-related risk their cybersecurity system will face. Unfortunately, systematic measurements reveal that too much information about cybersecurity leads to the completely opposite result. People who are overwhelmed with cybersecurity information become more risk-taking in cyberspace. This might be because constant reminders of potential risks make them overconfident or overoptimistic in this space, leading to a situation where people fail to spot rather obvious threats. The concept of overconfidence was introduced by Ola Svenson, a psychophysicist from Sweden, who conducted research on road safety in the 1980s. Svenson noticed that the overwhelming majority of car drivers in his survey sample believed that their driving ability was above average, while, statistically, this could only be true for at most half of his sample (relative to the median). He concluded, therefore, that people tended to exhibit overconfidence about their relative ability and to underestimate the abilities of others. We seem to observe a similar phenomenon in cybersecurity: people who receive large amounts of information about cyber risks and cyber defence, and who operate in highly regulated environments, tend to significantly overestimate their ability to detect and avoid cybersecurity threats. This, in turn, leads to risk-taking rather than risk-averse behaviour in cyberspace. As a result, staff members may engage in highly risky activities online due to misinterpretation of the services they get from third-party app suppliers.
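Svenson's statistical point can be illustrated with a small simulation (the skill distribution below is a hypothetical illustration, not his data): whatever shape the distribution of ability takes, by definition at most half of a population can sit above its median, so if the overwhelming majority believe they are "above average", most of them must be wrong.

```python
import random
import statistics

random.seed(42)

# Hypothetical "ability" scores for 10,000 drivers, drawn from a skewed
# distribution (most are competent, a few are very poor).
skills = [random.lognormvariate(0, 0.5) for _ in range(10_000)]

median_skill = statistics.median(skills)
above_median = sum(s > median_skill for s in skills) / len(skills)

# By definition of the median, at most half can be above it -- so if
# ~90% of respondents *believe* they are above it, most are mistaken.
print(f"Fraction above the median: {above_median:.2%}")
```

However skewed the inputs, the printed fraction stays at roughly 50%, which is exactly why near-universal "I am above average" beliefs cannot all be correct.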

Do as I say, not as I do: Ironically, the very people who are supposed to lead the cyber security agenda often turn out to be the largest offenders. Last year I interviewed the CEO of a large corporation about cyber security. Over the course of almost 2 hours, this CEO told me how important cyber security is for their organisation; yet the entire time, I could see a piece of paper attached to their desk lamp with what turned out to be their email password... What hope do employees have if the CEO does not set a good example?

Wrong KPIs and measurements: There is a general crisis regarding measures of success and the setting of good KPIs for cyber security. In most cases, success in cyber security is measured by the number of technological innovations, new software patches, training sessions implemented, etc. In all my cyber security research experience, I have only once met a company that actually measured the effectiveness of its own cyber security policies by considering how many attacks were prevented by the new technology it purchased and by the staff it trained.

The Biggest Mistake of All

Yet, perhaps, the biggest mistake of all is not the fact that we tend to punish people for making mistakes instead of rewarding them for timely threat reporting; not that we fail to show them a good example; and not even the fact that we overload our employees with technical and boring information. The biggest problem is that whatever message we construct about cyber security, it is usually framed in fear. The usual messages we see about cyber security are negative and target only one emotion: they are trying to scare. Think of the latest piece of information you received about cyber security. Was it framed in a positive way? I am willing to bet that it was not. You were probably told that "95-99% of all cyber breaches begin with a phishing email", that "billions of dollars, euros, pounds, and whatever other currency are spent on recovering from cyber security attacks", etc. You were also probably told nothing about how risky various cyber activities actually are. Do you know, for example, the probability of your identity information being stolen? How about the chance of someone stealing your email password, depending on how sophisticated your password is? I am guessing you do not know...
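The password question, at least, can be made concrete rather than scary. A rough sketch follows; the entropy formula is the standard upper bound (uniformly random characters, which human-chosen passwords never are), and the attacker's guess rate is an illustrative assumption, not a measured figure.

```python
import math
import string

def password_entropy_bits(password: str) -> float:
    """Rough upper-bound entropy: length * log2(size of character pool).

    Assumes each character is drawn uniformly at random from the pools
    the password touches -- real human-chosen passwords are far weaker.
    """
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable symbols
    return len(password) * math.log2(pool) if pool else 0.0

# Illustrative assumption: an offline attacker making 10 billion
# guesses per second against a leaked fast hash.
GUESSES_PER_SECOND = 1e10
SECONDS_PER_YEAR = 3600 * 24 * 365

for pw in ["password", "Tr0ub4dor&3", "correcthorsebatterystaple"]:
    bits = password_entropy_bits(pw)
    # On average an attacker searches half the space before succeeding.
    years = 2 ** (bits - 1) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{pw!r}: ~{bits:.0f} bits, ~{years:.2g} years to crack on average")
```

Even a back-of-the-envelope number like this gives staff something to reason with, which is precisely what a pure fear message does not.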

And this is the problem. In order to act, we need to understand. Yet it is very easy to construct a campaign aimed at scaring people with cyber threats, and very difficult to carefully explain the risks to them in a way that inspires long-term behavioural change. We do not know how bad the situation is, or whether it is bad in the first place. The outcome is very unfortunate: because they are based on fear, cyber security campaigns have only temporary effects. Human psychology is such that we get used to things. If someone tries to scare you with the same thing over and over again, you will be scared, but only for a limited amount of time. At some point, you will simply forget that you are scared, or you will have an experience that contradicts the fear message, and you will ultimately lose your focus. Think about it: if I tell you a horror story about clicking on email links, you will be scared of clicking on links for a while. Yet one day you will click on a link (or see someone else do it) and nothing bad will happen; after that point you will click on every link you see until either something bad happens or (most probably) you are told another horror story (the consequence of the latter being that after the second horror story you will be scared for a shorter period of time). I hope you will agree with me that (a) this is hardly constructive and (b) it certainly does not make you or your organisation more secure.


Cyber security behavioural design in businesses has so far failed to achieve the desired results. One of the major problems is the concentration on negatively framed fear messages in cyber security education and marketing campaigns. To tackle this problem, we need to work towards developing a positive cyber security culture within organisations, where KPIs are set correctly and risks are carefully articulated and explained to staff, who, in turn, become more empowered and enthusiastic about detecting new threats and preventing cyber attacks.