In recent years, many organisations have concluded that detecting new cybersecurity risks depends heavily on the ability of the humans within them to recognise, identify, and flag those risks. Technical capabilities and automation (such as the so-called "zero-trust" solutions) may help us deal with some problems and "patch" some security gaps, but such systems can only handle threats that (a) have been identified before and (b) have historical data behind them, since previous data is what allows us to build automated solutions for known or anticipated risks. The problem becomes far trickier when we consider risks and threats we have not encountered before (this also concerns fusion attacks, which creatively combine many known threats). Such new risks and threats can be effectively identified by only one cybersecurity sensor in the world - a human being. The paradox is that even though we humans are very vulnerable to cybersecurity threats, we are also capable of detecting them much more effectively than existing automated systems.
Human as a (cyber)Security Sensor (HaaSS)
In 2016, a group of scientists from the University of Greenwich - Ryan Heartfield, George Loukas, and Diane Gan - came up with a brilliant idea. They set off on a quest to understand whether, and to what extent, people are more or less susceptible to various types of cyber attack. Instead of trying to elicit people's propensity to engage in various risky activities, they developed an ingenious test: they classified all social engineering cyberattacks into 24 major categories and then designed a task in which each screen you see (as a test subject) may or may not be the beginning of a cyber attack. They concentrated on attacks that aim to breach security by means of user deception - i.e., by either (i) making people do something they do not want to do or (ii) making people NOT do something they are supposed to do. These attacks included, but were not limited to, the following list:
Source: Heartfield, R., & Loukas, G. (2018). Detecting semantic social engineering attacks with the weakest link: Implementation and empirical evaluation of a human-as-a-security-sensor framework. Computers & Security, 76, 101-127, Table 1.
The researchers then asked participants from around the globe to complete the test: each participant was shown a screenshot that might or might not represent the start of a potential attack and was asked to judge whether the screen they were looking at was indeed the start of an attack. The results were impressive. Despite the widespread view that humans are really bad at recognising cyberattacks, on average 74% of all participants (who were not cybersecurity professionals) were able to accurately tell "dodgy" screens from benign ones. Specifically, in the US (1863 participants), the UK (454 participants), Canada (293 participants), Germany (207 participants), and Australia (161 participants), people correctly recognised 74% of the social engineering threats. In the Netherlands (107 participants) the figure was as high as 77%, and in Brazil (138 participants) it was 72%. In other countries (1234 participants), 76% of threats were recognised correctly.
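As a quick sanity check, the overall 74% figure is consistent with a participant-weighted average of the per-country rates quoted above. The short sketch below computes it; the sample sizes and rates come from the study as cited, but treating the overall average as participant-weighted (and grouping the five 74% countries together) is my own assumption:

```python
# Participant counts and reported detection rates (%) per cohort.
# Grouping the five 74% countries into one cohort is a convenience;
# the participant-weighted averaging scheme is an assumption.
cohorts = [
    (1863 + 454 + 293 + 207 + 161, 74),  # US, UK, Canada, Germany, Australia
    (107, 77),                           # Netherlands
    (138, 72),                           # Brazil
    (1234, 76),                          # other countries
]

total_participants = sum(n for n, _ in cohorts)
weighted_rate = sum(n * rate for n, rate in cohorts) / total_participants

print(f"{total_participants} participants, ~{weighted_rate:.1f}% detection rate")
# → 4457 participants, ~74.6% detection rate
```

The weighted figure (~74.6%) rounds to the 74% headline average, which suggests the per-country numbers and the overall claim hang together.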
Are People Better at Detecting Cyber Risks than Machines?
Yes! Ryan Heartfield and George Loukas ran further tests using Cogni-Sense - "a Microsoft Windows prototype application, designed to allow and encourage users to actively detect and report semantic social engineering attacks against them." Over a period of 45 days they subjected study participants to various sophisticated social engineering tests and found that people consistently outperformed technical security systems. So what do these results tell us? People are not as bad at detecting social engineering attacks as we originally thought. Why, then, are we still making all these mistakes, opening phishing emails and clicking on dodgy links? One reason is that the test puts you in the right mindset: during the test, people are deliberately looking for potential threats, whereas under "normal" conditions we lose our focus and start making mistakes.
But a more interesting result from all these studies is that some people (yes, they DO exist) get all the test tasks right 100% of the time. Naturally, the question is: what is it about these people that makes them different from the rest? And if we can single out these essential differences, can we teach others to do the same? One can imagine two scenarios in an organizational or business domain. In the first, you recruit a whole department of these top-performing individuals, who process all suspicious cyber events and detect threats. In the second, you teach everyone in the organization to be more effective at threat detection.
While we do not yet know what separates the top performers (apart from the fact that they are usually not cybersecurity professionals) from the rest, future research should be able to answer this question. I certainly hope that these "skills" or "intuitions" can be learned or developed by all employees rather than a select few.
Meanwhile, we are stuck with trying to recruit the best cybersecurity talent and to educate everyone else to be aware of the potential risks. In this task, the "right" motivation is key. Hackers are not concerned about organizational department boundaries or which operating systems they prefer. They will do what they want and work with whom they want. On the company's side, even if your employees have the right level of skill, you need to ensure they are correctly motivated and that they care. This is one of the key issues in cybersecurity recruitment: traditional recruiting may identify motivations, but these may not translate into job performance later. It can be beneficial to get to know potential recruits before they become part of the organization; recruitment fairs, internships, and pre-screening are often useful in this regard.
Even after recruitment, characteristics such as personal resilience and perseverance may not become evident until a few weeks later when, in the middle of an attack, someone has to think clearly under stress and pursue actions to respond to the threat. These are the real tests. Such people are key to cybersecurity, but they must also be team players, able to co-ordinate actions as a group. Creativity as well as technical skill, a healthy hunger to learn, and motivation are also key. Having a range of experiences and qualifications within the cybersecurity group also enriches the team dynamics, bringing creative and diverse thought patterns. This is why we often talk about the difficulty of finding the right people for cybersecurity units: ultimately, those who care, and who would stay after hours to help you deal with the multitude of threats, are the ones who make the most difference.
Human cybersecurity sensors remain an underexplored area for most organizations. Yet the potential of such human sensors is limitless. After all, there ARE people who detect threats correctly 100% of the time...