Author - ComstarISA | Jul 20 2018 06:00
What are cognitive biases? A useful definition comes from the Interaction Design Foundation, which describes the term as “an umbrella term that refers to the systematic ways in which the context and framing of information influence individuals’ judgment and decision-making.” In other words, cognitive biases are specific, well-defined, systematic errors in reasoning. Because they shape how people think and decide, they can ultimately put an enterprise’s sensitive data at risk.
These errors are also behind most employee failures in social engineering tests. Social engineering, after all, is little more than the systematic exploitation of human cognitive biases. Hackers and phishing attackers know exactly how to use these errors to slyly persuade recipients to voluntarily open links they would never click if their actions were governed by perfect logic.
Moreover, in most incident response and data security cases, the responder fails to approach the problem systematically and with reason. Instead, the urgency of a malware threat is allowed to corrupt the process, driving up costs, consuming time, and creating potentially perilous situations.
These logical errors can also become a security risk when it comes to composing and interpreting technical documentation for software or hardware features. Authors of this type of documentation must be extremely familiar with the issues, technologies, processes, and methods they intend to describe.
The catch is that because these details are top of mind for the author, descriptions may emphasize or omit contextual cues in ways that mislead readers who have a different set of ideas in mind or are less familiar with the issues at hand. Put simply, the writer can become a source of confusion for readers by leaving context unexplained.
Furthermore, a 2018 survey of 155 IT professionals at the RSA Conference held in May found that 26 percent of companies ignore security bugs because they believe they don’t have time to fix them. The problem, however, is that dealing with the consequences of unfixed bugs tends to take longer than implementing the initial fix would have. This pattern could be the result of a cognitive bias called hyperbolic discounting, in which choices that benefit the present self are given priority over those that benefit the future self.
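The trade-off can be sketched with the standard hyperbolic discounting formula, V = A / (1 + kD): a cost A that is D periods away feels like only V today. The specific numbers below (fix cost, breach cost, discount rate) are illustrative assumptions, not figures from the RSA survey.

```python
# Hyperbolic discounting: a future cost A, incurred D periods from now,
# is perceived today as V = A / (1 + k * D), where k is the discount rate.
def discounted_value(amount: float, k: float, delay: float) -> float:
    """Perceived present value of a cost incurred `delay` periods from now."""
    return amount / (1 + k * delay)

# Hypothetical numbers for illustration only:
fix_now_cost = 10.0        # effort to patch the bug today
future_breach_cost = 40.0  # larger cleanup cost if the bug is exploited later
k = 1.5                    # steep discount rate (strong present bias)
delay = 12                 # breach expected roughly 12 periods away

perceived = discounted_value(future_breach_cost, k, delay)  # 40 / 19 ≈ 2.11
# Even though 40 > 10, steep discounting makes the future cost *feel* smaller,
# so "ignore the bug" wins the biased comparison.
print(f"perceived future cost: {perceived:.2f}")
print("biased choice:", "ignore bug" if perceived < fix_now_cost else "fix now")
```

The sketch shows why the bias is self-defeating: the actual future cost (40) dwarfs the fix cost (10), but the discounted perception (about 2.11) makes deferral look rational in the moment.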
In this context, the benefit of ignoring a bug now is given more weight than the cost of dealing with the problem later. The survey also revealed that IT professionals deliberately ignore security holes for other reasons, including a lack of knowledge about how to proceed. This choice could be driven by the ambiguity effect, a cognitive bias in which a lack of information steers a decision: because the path to troubleshooting a problem is unclear, that path is rejected. Finally, fewer than half of the organizations surveyed said they patch vulnerabilities as soon as they’re known, and eight percent of respondents reported applying patches just once or twice per year.
Awareness of these biases must be a core part of every security training session. The first step toward overcoming them is for everyone to understand that they exist, that they’re pervasive, and that they have a negative impact on data security. They are also the reason best practices matter: best practices embody institutional learning and lessons that reduce reliance on individual thought processes. Most importantly, security professionals must overcome the biases that enable biases. At many organizations, security specialists fail to understand the perspective of less technical users. This gap is itself a cognitive bias, known as the curse of knowledge, and it can result in false assumptions and poor communication.