“Only amateurs attack machines. Professionals target people.” —Bruce Schneier, the well-known security technologist and cryptographer.
Any kind of security that is meaningful and useful to people involves people: their interactions with the environment, their relationships with others, and their myriad thoughts and biases. Various social theories have tried to model people’s decisions as if people were computers whose behaviour can be captured in mathematical equations, but they have failed to bracket human behaviour and action. The fact is that people are not deterministic. They have limited cognition, and their behaviour is also a function of context and inherent biases. At times this leads them to act in ways that conflict with their own intentions, ignoring repercussions of which they are, at that moment, fully aware. Taking a behavioural approach to security means focusing less on how people should act, or how we expect them to act, and more on how they actually act.
More than 90% of businesses that have experienced data breaches affecting their public cloud infrastructure attribute them to human factors, and these attacks had a significant element of ‘social engineering’ built in. Social engineering is the art of exploiting human psychology, rather than technical hacking techniques, to gain access to buildings, systems or data. The term was popularised in the 1990s by Kevin Mitnick, a famous hacker turned security researcher. As Schneier also observed, “If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.” Socially engineered attacks can circumvent even the most advanced cybersecurity systems because they prey on human behaviour, pushing users into performing an action or divulging information through psychological manipulation. In email attacks like phishing, these techniques typically involve getting the target to click an embedded link, download malware such as ransomware, reveal passwords or authorise financial transactions.
Why do people do what they do?
Social and behavioural psychology research tells us that people tend to take a particular path when the environmental constraints are perceived as removed, or when their behaviour is nudged in that direction. In simple words, people tend to do things when they are easy to do. Social psychologists call these catalysing environmental factors ‘channel factors’ because they tend to ‘channel’ people’s actions.
In a security context, employees’ and users’ behaviour can result in unintentional actions, or a lack of action, that cause or allow a security breach to take place. In a structured organisation the former is unlikely, but the latter is a matter of concern, and it can be understood through the psychology of risky behaviour. Employees seeking greater productivity tend to follow the path of least resistance, a tendency that is curtailed only when the punishment at stake outweighs the reward. Behavioural and cognitive psychologists have appealed to constructs such as negative emotions, self-awareness, social exclusion, lack of self-regulation and pseudo-positive views of risk-takers to explain irrational risky behaviour, and have added an emphasis on factors such as impulsivity and sensation-seeking. They have also focused on the cognitive processes that keep people from attending to, or recalling, the right kinds of security information triggers. Beyond all of this, the ever-present force of habituation is very difficult to control. Users’ decisions can range from downloading a malware-infected attachment to failing to use a strong password, which is part of the reason why the security aspects of human behaviour are so difficult to address.
Web of interconnections & data tsunami
The real-world system is a complicated web of interconnections churning out data at an unfathomable rate. The complexity and interplay of systems has reached an inflexion point beyond what the human mind can make logical sense of. Security must delve deep into systems and workflows, addressing their design, components and connections. Modern systems have many components and connections, many of which are not even known to the designers, implementers or end users. No system can be perfect, and no technology can provide the ultimate answer to security problems.
Is human error really an error?
Most often in the cybersecurity context we come across the term ‘human error’. It is usually said that most security breaches are due to human error. But can anything that happens over and over again, and cannot be eliminated, really be termed an error? Using the term gives us a false assurance that it can be eliminated, whereas human nature will remain the way it is. We have to model our security solutions around the inherent human tendency to act in non-deterministic ways, rather than maintaining a false sense of security by modelling humans as deterministic machines.
Why training has failed to achieve its objective
In a security context, human behaviour and decisions, and the susceptibility to fall prey to social engineering, have traditionally been viewed as a knowledge-gap problem that can be bridged by training personnel. However, social engineering is a complex interplay of human psychology, technology and orchestration that falls within the realm of artistic deception. It exploits primitive human survival instincts like greed and fear, behavioural tendencies like trust and bonding, and a myriad of other human emotions. Social engineering has multifarious manifestations and may be beyond human capability to fathom, let alone predict.
Why existing solutions fall short
To combat the risks of a security breach, most security solutions rely on user-behaviour monitoring. These are usually rules-based or machine-learning-based solutions that ingest troves of data about employee and user actions, especially their use of IT systems. Generally, they attempt to identify divergence from what is considered “normal” behaviour for a particular employee; when an anomaly is detected, the solution flags the action (a minimal sketch of this baseline approach follows the list below). While this method can be helpful, we find that it usually falls short, for the following reasons:
• By the time the negative behaviour is detected, the breach has often already occurred. The organisation is at a disadvantage, as it cannot stop the attack a priori.
• The “divergence from normal behaviour” approach creates a large number of false positives, wasting valuable working hours.
• Serious threat actors may keep their malicious activity within the baseline of “normal” activity and may never be caught.
• Anomaly detection requires massive amounts of employee/user data, which creates privacy concerns and carries potential for abuse.
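To make these shortcomings concrete, here is a minimal, hypothetical sketch of the baseline approach such tools rely on: score each user action against a per-user statistical baseline and flag outliers. The feature (daily download volume), the data and the threshold are all illustrative assumptions, not any vendor’s actual implementation.

```python
import statistics

# Hypothetical per-user baseline: data downloaded per day (MB) over recent history.
history = [120, 150, 130, 160, 140, 155, 145]  # illustrative values only

def is_anomalous(todays_value: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag an action whose z-score against the per-user baseline exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (todays_value - mean) / stdev if stdev else 0.0
    return abs(z) > z_threshold

# A legitimate quarter-end report run is flagged (a false positive), while an
# attacker who exfiltrates data slowly stays inside the baseline and is never caught.
print(is_anomalous(600, history))  # True  -- legitimate spike, flagged anyway
print(is_anomalous(170, history))  # False -- slow exfiltration, never flagged
```

The two example calls show both failure modes from the list above: the honest spike wastes an analyst’s time, and the patient attacker sails through.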
The way ahead
Security solutions need to incorporate human behaviour ab initio and cater for its error margins. This may require a paradigm shift in which security is approached as a process rather than a product. The following can help in streamlining that process:
We have to acknowledge that human behaviour and actions are non-deterministic and model our security solutions accordingly. Modelling humans as deterministic machines is a fallacy that will leave a massive gap in the security apparatus.
Businesses have to understand their workflows and automate most of the process, keeping human intervention to a minimum. The more decision points are left to users, the more complex the system becomes, and ultimately the more vulnerable.
Software systems and applications need to be viewed like data: each employee should be allowed only as much as is required. Only a restricted set of applications, those essential for performing the task, should be permitted. A whitelist of applications based on the employee’s defined role can help streamline the security process, as the sketch below illustrates.
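A minimal sketch of such role-based application whitelisting, with hypothetical role and application names chosen purely for illustration:

```python
# Hypothetical role-to-application whitelist; the names are illustrative only.
ROLE_WHITELIST: dict[str, set[str]] = {
    "accounts": {"ledger-app", "email-client"},
    "developer": {"ide", "git-client", "email-client"},
}

def may_launch(role: str, application: str) -> bool:
    """Allow an application only if it is whitelisted for the employee's role.
    Anything not explicitly whitelisted is denied by default."""
    return application in ROLE_WHITELIST.get(role, set())

print(may_launch("accounts", "ledger-app"))   # True  -- essential for the role
print(may_launch("accounts", "torrent-app"))  # False -- denied by default
```

The design choice worth noting is the default-deny stance: the employee’s role defines what is possible, so there is no per-user decision point for an attacker to exploit.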
Subsequently, the operating system can be made user-configurable, with a built-in AI-based predictive self-learning algorithm that assesses user tendencies and heuristics. This would modulate privileges and decide the overall user experience each user is allowed to have.
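No concrete algorithm is prescribed here, so the following is only a speculative sketch of what such privilege modulation might look like: a running risk score updated from observed behaviour, mapped to a privilege tier. The events, weights and tier thresholds are invented for illustration.

```python
# Illustrative behaviour events and risk weights (assumptions, not a real product).
RISK_WEIGHTS = {
    "clicked_unverified_link": 0.30,
    "disabled_screen_lock": 0.20,
    "completed_phishing_drill": -0.15,  # good behaviour lowers the score
}

def update_risk(score: float, event: str, decay: float = 0.95) -> float:
    """Decay the old score toward zero, then add the observed event's weight."""
    return max(0.0, min(1.0, score * decay + RISK_WEIGHTS.get(event, 0.0)))

def privilege_tier(score: float) -> str:
    """Map the running risk score to a privilege tier (thresholds are assumptions)."""
    if score < 0.3:
        return "standard"
    if score < 0.6:
        return "restricted"   # e.g. external media and unknown downloads blocked
    return "supervised"       # high-risk actions require explicit approval

score = 0.0
for event in ["clicked_unverified_link", "clicked_unverified_link"]:
    score = update_risk(score, event)
print(round(score, 2), privilege_tier(score))  # 0.58 restricted
```

The point of the sketch is the feedback loop: repeated risky behaviour gradually narrows the user experience, rather than relying on a one-off training session to change it.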
Internal segmentation and the principle of hierarchical privileges should be followed to limit the damage from attacks.
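As a hedged illustration of internal segmentation, the sketch below models a default-deny policy between hypothetical network segments, where traffic is permitted only along explicitly allowed, hierarchical paths:

```python
# Hypothetical segments and explicitly allowed flows; the default is deny.
ALLOWED_FLOWS = {
    ("workstations", "internal-apps"),
    ("internal-apps", "database"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Permit traffic only between explicitly whitelisted segment pairs."""
    return (src, dst) in ALLOWED_FLOWS

# A compromised workstation cannot reach the database directly, so the
# damage from a breach is contained within one segment of the hierarchy.
print(is_allowed("workstations", "internal-apps"))  # True
print(is_allowed("workstations", "database"))       # False -- blocked
```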
The windows through which employees can be socially engineered should be identified and minimised as far as possible by replacing them with automated processes.