Socio-economy & New Tech

Post-Doctoral Fellowships

United Kingdom

Just Culture for Managing Human Aspects of Cyber Risk

Cyber security work, and particularly risk mitigation, has often focussed upon technical advancement, with the role of humans viewed as little more than a hazard: people who, through error or mistake (intentional or otherwise), are at fault or to blame for security problems. Whilst this may be true to some extent – the human is an intrinsic component of any socio-technical or organisational system – the culture of blame has led to a misconception that better security comes from better systems, where smarter technology ideally removes the need for, or use by, humans.

In May 2017, the WannaCry ransomware attacked computers running unpatched versions of Microsoft Windows around the globe. Speculation immediately turned to the role of the user in perhaps having clicked a malicious link, propagating the spread of the worm. In the United Kingdom the National Health Service (NHS) was crippled, surgical operations were cancelled, and the attack was estimated to have cost in excess of £92M.

In the days and weeks after WannaCry surfaced, much attention was paid to who was to blame for the attack. The list of potentially blameworthy parties was long, and the need to find a cause, or a responsible party, is somewhat fundamental to human nature: not to do so implies a loss of control, and that is distressing. The reality is that they all played a role. But the initial race to assign blame did nothing to help understand and dissect the problem, nor to ensure it could not happen again. The reason is that when blame is applied to error, especially very public, potentially life-threatening error, all parties are keen to separate themselves from being held accountable. And when the users, owners, developers and maintainers of a system all actively seek to avoid what might be a punitive situation, the facts of what occurred, and how, become lost in obfuscation.

Being able to identify, recognise and learn from those errors and risks is fundamental in designing better, less risky, more resilient systems. However, the very viewpoint of seeing humans as merely users, and worse still the ones to blame for those errors, creates an obstacle to gaining this perspective. 

Objective 

This fellowship is designed to capture that perspective by furthering development of the new Security Ergonomics paradigm, proposed as a key requirement for designing and implementing security in complex socio-technical systems. By developing novel methodology based upon work from the Human Factors / Ergonomics (HF/E) field, this fellowship will provide a transformative understanding of how to manage human aspects of cyber risk – providing empirical grounding on whether, how and why humans, when empowered, can be security heroes rather than the colloquial "weakest link", by overcoming barriers to understanding cyber risk.

Just Culture as Methodology 

The failings of blame culture in understanding incidents have long been understood in safety-aware domains such as aviation and healthcare. To counter this, Just Culture provides a culture of trust, learning and accountability, creating a safe environment, after an incident has occurred, within which interactions between the components of socio-technical systems, including organisations, can be understood. Further, Just Culture draws a line between blameless and blameworthy actions and allows for iterative, continuous system improvement towards removing active failure (human error). Rather than merely applying new technology to the problem, or engineering the human-in-the-loop out, Just Culture provides a method for gathering foundational knowledge of how human error has occurred – it is about understanding the risk to the system based upon prior events.

By way of example, current research looking to understand the cyber security risks within UK universities has highlighted that all of those interviewed have policies around the use of IT (often referred to as an "acceptable usage policy") and how disciplinary action (blame) may result in the event of transgression (active failure / human error). However, no university has expressed any formal distinction between deliberate and inadvertent human error. Nor does there appear to be any consistency in how such policies are applied, with, in general, an overall culture of leniency towards all cyber security related error.

This is consistent with prior work on security cultures, which found that "organisations fail to work in a coherent manner" and tend towards a culture-based approach to organisational behaviour and work. This cultural leniency shares similarities with aspects of Just Culture, and this less formal approach to handling cyber events may well be evident in other organisations. However, with no current academic research into the use of Just Culture for cyber security, it is necessary first to understand the extent of blame cultures, and second to assist in shifting organisational culture towards a position from which cyber risk can be understood.

Impact 

The empirical insights into how, by using Just Culture, organisations can gain a clearer view on when, how and why cyber events occur will be unique in the field of cyber security and cyber risk management. Moreover, this work provides a paradigm shift in how humans are viewed – as an integral component of the system rather than merely a user – and hence how they are engaged in understanding and managing risk within complex socio-technical systems.

Barnaby
CRAGGS

Institution

University of Bristol

Country

United Kingdom

Nationality