The risk of cyber-attack to UK companies is greater than ever. With 90% of cyber breaches involving phishing techniques (NCSC, 2016), it is increasingly important for organisations to find ways to raise awareness of phishing attacks whilst maintaining positive relationships with employees (Kirlappos & Sasse, 2015). Organisations need not only to know who is susceptible to which kinds of phishing attack, but also to prevent such incidents from occurring. For this reason, a number of organisations conduct simulated phishing exercises, in which employees are sent emails that mimic phishing attempts. Simulated phishing can be used to test which employees are susceptible to which kinds of attack, to provide instant feedback and timely, “just-in-time” training when links are clicked, and to form the basis of repercussions for individuals who click phishing links.
Raising awareness, however, might not be sufficient to protect an organisation, especially if such exercises carry unintended, negative outcomes. For instance, trust between employees and their organisation is a vital component of compliance with security policies (Kirlappos & Sasse, 2015). Simulated phishing exercises have been argued to undermine this trust (Murdoch & Sasse, 2017) and to create a hostile environment in which employees are blamed or actively punished for slip-ups, ultimately reducing long-term reporting (Murdoch & Sasse, 2017; NCSC, 2018). However, some of these assertions have not been directly tested, and they do not account for the different ways in which simulated phishing can be implemented (e.g. to provide feedback, training or punishment to those falling victim).
In SPEC, we sought to address these research gaps through two studies. In study 1, we explored how organisations use simulated phishing, and their use of “carrots” and “sticks” in their cyber security campaigns, through a cross-sectional survey of awareness professionals. Organisations varied in how they used simulated phishing, but sanctions were used in over 90% of them. Organisations also took a stepped approach to repeat clickers in simulated phishing exercises, escalating the intervention (such as further training or sanctions) according to susceptibility.
In study 2, we explored the impact of three interventions on anti-phishing detection and task performance: forced just-in-time (JIT) training (a mandatory training course), lean JIT (a brief training message) and punishment (loss of a performance payment). We also investigated secondary impacts on mental workload, perceived fairness, state anxiety and task autonomy. In an experimental assessment, all three interventions significantly decreased phishing susceptibility. However, forced JIT negatively affected task performance, perceived fairness, state anxiety and mental workload; punishment also negatively affected perceived fairness, state anxiety and mental workload. These findings suggest that forced JIT and punishment should be implemented in organisations only with caution, given their negative consequences for employee wellbeing. Instead, the study encourages the use of lean JIT, which decreased phishing susceptibility without negatively affecting mood, productivity or perceived fairness.
Overall, both studies highlight the need for greater consideration of behaviour change techniques (such as training and punishment) and their potential unintended consequences in organisations. Taking a “people-centric” approach to cyber security, one that sees people as part of the solution, will be essential to building cyber resilience against phishing.
As part of CREST’s commitment to open access research, this text is available under a Creative Commons BY-NC-SA 4.0 licence. Please refer to our Copyright page for full details.
IMAGE CREDITS: Copyright ©2022 R. Stevens / CREST (CC BY-SA 4.0)