About CREST Debates
CREST Debates is a Q&A podcast series where experts go head-to-head on the big questions in security research. Each episode brings together leading thinkers to explore different perspectives on complex topics, from insider threats and misinformation to technology and trust.
Recorded in front of a live audience, these thought-provoking conversations dig beneath the surface, challenging assumptions and offering fresh insight into issues that shape security, policy, and public understanding.
Whether you're a practitioner, policymaker or curious listener, tune in to hear debates that matter, grounded in evidence, rich in expertise, and designed to spark conversation.
In Pursuit of Insider Risk
Drawing on decades of experience, Professors Paul Martin and Ros Searle discuss the human factors behind security breaches, the importance of trust, and how policies are understood (or ignored) in practice. From AI insiders to moral disengagement, this wide-ranging conversation offers valuable insights for anyone interested in building safer, more resilient organisations.
Episode Highlights
Insider risk is more than cyber: Prof Martin explains why personnel security is the Cinderella of protective security and why insider threats are about people, not just information systems.
Terminology matters: Confusion around terms like personnel security, insider threat, and vetting can obscure the real issues.
Codes of conduct can backfire: Prof Searle shares research showing that formal codes of conduct may increase moral disengagement if not backed by meaningful practice.
Trust is central: Both speakers highlight trust as the universal currency for addressing insider risk and promoting organisational resilience.
AI insiders are emerging: Prof Martin warns that intelligent systems could one day behave like human insider threats.
Voice and silence in organisations: Prof Searle unpacks why people often fail to speak up, and what organisations can do to make reporting safer and more effective.
- 00:00:00 Welcome to the CREST Podcast
- 00:01:36 Prof Paul Martin's Intro
- 00:14:45 Prof Ros Searle's Intro
- 00:29:57 Question 1: Trust Perceptions: UK vs Italy
- 00:35:04 Question 2: Codes of Conduct & Organisational Type
- 00:48:49 Question 3: Insider Risk Across Global Cultures
- 00:51:42 Question 4: Preventative Measures & Unintended Consequences
- 01:02:56 Question 5: Welfare Support: Resources, Mental Health & Trust
- 01:10:47 Question 6: What Does Loyalty Look Like?
- 01:18:09 Question 7: Reframing Risk: Threat vs Vulnerability
- 01:19:22 Question 8: Reframing Risk: Reporting Concerns
- 01:23:57 Question 9: Risks of Purpose-Driven Cultures
- 01:25:43 Question 10: Insider Risk Across Generations
- 01:32:16 Question 11: Security Vetting: Streamlining the Process
- 01:38:46 Question 12: AI Insiders: What Are They?
- 01:44:46 Closing remarks
Transcript
Welcome & Introductions
CREST Podcast Intro
00:13 – Introduction by Professor Simon Wells (SW)
Good afternoon, everyone, and welcome. It's a typical drizzly Tuesday here in London, but we're glad you could join us. Today's session is part of what we'll call a CREST Debate, though perhaps more of a debate with a small ‘d’. The goal isn't confrontation, but rather to explore different perspectives on a complex issue: personnel security, insider risk, and how we can better protect our organisations.
We're fortunate to be joined by two leading experts in this field, Professor Ros Searle from the University of Glasgow, and Professor Paul Martin from Coventry University. Both have spent years researching this topic and they'll be sharing their insights from their work.
The format is simple: each professor will speak in turn about their research and views. We'll start with Professor Paul Martin, followed by Professor Ros Searle. There's no strict time limit. We want to give them the space to explain their thinking and the evidence behind it. Once both have spoken, we'll open the floor for questions from the audience, from myself, or from other CREST colleagues, to help shape a broader discussion around what their findings mean for our day-to-day roles.
We’ve got a diverse group in the room today representing a range of responsibilities and backgrounds. So we're looking forward to a lively and thoughtful exchange. So with that, I'll hand over to Professor Paul Martin to begin.
Prof Paul Martin
01:33 – Professor Paul Martin (PM) introduction:
Okay, so a bit about me. I'll give you the shortest possible version. Originally an academic, a long time ago, behavioural scientist. Then a very long time in the UK national security world. As a practitioner, not as a scientist. One of my roles was as head of what is now NPSA. Previously CPNI. And then I was director of security for the UK Parliament. And I now do a variety of advisory and academic things, including as a Professor of Practice at Coventry University with the new Protective Security Lab, which is based in London and is doing research on insider risk and other aspects of protective security.
So I'm very pleased to be here, having this opportunity to talk about it, and also to hear your thoughts. I want to give just some broad views about insider risk and personnel security in the hope of getting some conversation going.
So insider risk is a particular kind of security risk that requires a particular kind of response. I think it's got some interesting characteristics that don't get enough attention. One of them is that I think it is actually a really interesting subject. It is about humans, human behaviour, human attitudes, and so on. So in comparison to quite a lot of the kind of security world which can be a little bit dry and abstract, it's really interesting. It's amenable to empirical research as well.
Although in my view, personnel security is still largely grounded in the world of custom and practice and supposition rather than empirical evidence. So it's interesting. It's about people. Except, and I'm just going to mention this, we may come back to it, there's also ‘AI insiders’ as an emerging risk. So it's a risk arising from intelligent entities, historically only humans, but increasingly now, not just humans. So ‘AI insiders’.
It's a subject, an area: ‘insider risk’. Insider risk is the problem. Personnel security is allegedly the solution. It's neglected. Sometimes described as the ‘Cinderella of protective security’. We live in a very cyber centric world. Lots of organisations that I've dealt with, and advised, see personnel security as a subset of cyber security. They see insider risk as a kind of information security thing. It's about protecting digital systems, and information, and data. It is that, but it's actually a lot more than that. Every company, organisation of more than about a dozen people has a chief information security officer. The world is full of CISOs. As far as I know, I've not been able to discover a single organisation anywhere on the planet that, so far, has created a chief personnel security officer.
Insider risk is the problem. Personnel security is allegedly the solution
It's misunderstood because the terminology is awful. It's really garbled and confusing. You know: ‘personnel security’, ‘people security’, ‘personal security’. ‘Vetting’, ‘insider risk’, ‘insider threat’. These are kind of terms that do have specific meanings, although there are a number of competing definitions around, to add to the confusion. But they're often used interchangeably, as though they mean the same thing, so there's a lot of kind of unnecessary confusion. Terminology is awful, including, actually even, the term ‘personnel security’. Apart from the endless confusion with ‘personal security’, something else I'm also interested in, which is threats to the safety and security of people in their private lives, like politicians, for example. Very little to do with personnel security. People security is something else, NPSA have got a thing about that. With the advent of AI insiders, calling it personnel security is looking increasingly kind of anachronistic.
It's also a subject that's got a pretty thin empirical base. A lot of what goes on in the name of personnel security is based on custom and practice. A lot of things are assumed to be true. Things like threat indicators; how you spot an insider. There is some evidence out there, but a lot of it is of low quality and there are huge gaps in it. So there's a lot of research that needs to be done.
It's a risk that's widely underestimated. So many organisations I've encountered have this mindset of, ‘yeah, we can see that in theory, this is a problem, but we don't have a problem with it.’ How do you know you don't have a problem? ‘Well, we haven't had any major incidents’. So this is this sort of recurring problem in protective security of people confusing the absence of evidence of a threat with evidence of absence of risk. Just because you haven't had any major disasters with insiders, doesn't mean you don't have an insider risk. In fact, what it probably is saying is that you've got rubbish personnel security and you're unable to detect the risk.
So of course, like all of the interesting security threats, it's covert. The best insiders are covert actors. And if they're acting with the help of a sophisticated external threat actor, like a hostile foreign state intelligence agency or an organised crime group, they'll have really good tradecraft.
So, and, you know, there are lots of case histories, the great spy cases of very high-level insiders, spies who've operated inside very high security organisations like the CIA, the FBI, MI6, MI5, and so on for years, decades, in some cases, without being detected. It's covert. You don't know what you don't know. So there's a systematic bias towards underestimating it. There's a real problem of unknown unknowns in this area.
Another characteristic which is widely neglected, but I think really important, is that it is a systems problem, in a number of ways. First, the most basic way, a scientist would look at this and say, like all the interesting and important problems in the world, what we're seeing is an emergent property of a complex system. Insider risk emerges from organisations and groups of people. It's a complex, adaptive system. So it's an emergent property of a complex system. Systems problems require systems solutions.
It's also a systems problem in that, personnel security has to be a distributed function in an organisation. If you think about the elements that go to make up personnel security. It's everybody in the organisation. It's IT, it's HR, it's legal, it's compliance, it's audit. It's all the managers, it's all the colleagues. It's the whole shebang. It's not something that sits inside a specialist security function. So it's a systems problem.
In practical terms, that means there are no silver bullets. There's an awful lot of snake oil out there. There's a lot of wishful thinking, in organisations, public and private sector, that there's a whizzy bit of software, for example, a piece of technology that will find the insiders and solve the problems. You don't have to worry about it. In practice a lot of them don't work. But in principle, that's never going to be an adequate solution. Systems.
Finally, I would just highlight that, personnel security is pretty immature in comparison to cyber security, for example, in various ways. One way is that it's generally very un-strategic. If you ask an organisation, what's your personnel security for, what are you trying to achieve? What does good look like? You'll often be met with sort of blank incomprehension. Sometimes there'll be somebody...somebody muttering about finding rotten apples or bad apples, which is, you know, a deeply misleading metaphor. And I can say why I think the whole rotten apple thing is best forgotten about... I think when it comes to personnel security, the old adage about ‘culture eats strategy for breakfast’ is precisely the wrong way around. I think strategy eats culture for breakfast in this case. It's fine, worrying about your culture if you know what you're trying to do, if you've got a strategy, but if you've got no strategy, there's no point in thinking about what your culture should look like. And often there is a lack of strategy.
There are at least three different things that you could aspire to do with personnel security. The obvious one, the one that actually does the job for most organisations is the good old standard securocrat goal of reducing risk. It's about understanding and managing risks, insider risks. The purpose of the security is to understand the risk and then mitigate it or manage it. Don't forget the understand bit. So many organisations just plunge into action - risk registers with milestones and action plans - without stopping to think about, ‘what's the problem we're trying to solve?’ And that's fine. I mean, reducing risk is a good thing to do. Stopping bad things from happening - good thing to do. But it can seem, and indeed is, a sort of slightly negative view of it. And some organisations baulk at spending a lot of money and putting up with a lot of inconvenience in order to stop bad things from happening, particularly if they're sceptical about the reality of the risk.
I think trust is the way forward on this. So, I think the universal currency of insider risk and personnel security is trust. The purpose of personnel security is to ensure that the people that you trust are sufficiently trustworthy and remain so. And your organisation is building higher levels of trust.
The universal currency of insider risk and personnel security is trust.
And that will not only reduce your insider risk, it has all sorts of other business benefits as well. There's a lot of evidence for that. It's also a way of making your organisation more resilient. So, in my experience, some organisations are much more receptive to the idea of personnel security being about building trust or building resilience, or both, as well as reducing risk - actually doing all three.
Another basic strategic point or guiding principle is that prevention is better than cure. So much personnel security is about essentially dealing with symptoms. So, a lot of the tech stuff is about catching people in the act of breaking rules, performing transgressive acts. That's a good thing to do if people are behaving badly and causing harm, for whatever reason. You want to know that, and you want to stop them and catch them. But what you really want to be doing is getting ahead of the problem, spotting the early warning signals that will enable you to intervene, and often that intervention will be a welfare intervention. It won't be a disciplinary intervention. So, prevention is better than cure.
And just the final point, when you're thinking about how you design personnel security, it's recognising that the risk, like all security risks, is a dynamic risk. It's constantly changing and it's adaptive. You're in an arms race with intelligent human beings, and they're watching what you're doing, and they are immediately trying to work out ways of circumventing and defeating what you're doing. So, it's a continuous process. And to be successful it needs to be very swift and agile. And a lot of the processes that organisations have, particularly in government, impose these very long timescales, the result of which often is that the defenders are always on the back foot. The threat actors, who don't have to worry about the law or ethics, or budgets or regulations, have got a lot of advantages in terms of speed. So those are just some thoughts to throw out there for possible discussion. Thank you.
SW: And you have brought out a book on that particular topic.
PM: I have scribbled away and written a book about insider risk and personnel security, which has the snappy title, at the insistence of the publisher: ‘Insider Risk and Personnel Security.’
SW: Did you bring a copy of the book?
PM: I did sir!
SW: Who’d have thought… this is such an opportunity to offer this...
PM: Very commercial.
SW: Thank you. I'm going to hand the floor, if you don't mind, to Professor Ros Searle. And again, would you mind just giving us a little bit of your background?
Prof Rosalind Searle
14:36 - Professor Rosalind Searle (RS) introduction
No. Very happy to do it. So, hello I’m Rosalind Searle. So I'm at the University of Glasgow and I'm a professor there of Human Resource Management and Organisational Psychology. My background was I started off as a work psychologist, working in the motor industry, and then became an academic through a practice route.
How did I start? I stumbled into bad stuff through the route of trust, and through understanding trust, and then distrust. And through that, I had spent a long time delving into different spaces. And more recently, looking at a health context, in which we really want the person who is doing that operation to do all of the right stuff in that space to answer our colleagues. So I have done that.
But also increasingly, my role is thinking about how do we communicate, as scientists, our research, to enable practice and policymakers to change and be more informed? So I've worked with Becky, particularly developing animations. So we've got an animation that looks at ‘Why do Good People do Bad Stuff?’ And then the second one looking at ‘Silence is Golden’, which is trying to understand why do people not speak up when there's bad stuff going on around them? And we're recently working on new animations, particularly looking at the sexual violence space.
And again, really applying a preventative approach to that, looking at all of the different strands, all of the different components that you need to weave together into a really strong rope. So I always like that kind of maritime metaphor of a rope that's got lots of different strands, and each one of those is interdependent and working together to create something that's much more robust.
I think building on what Paul was highlighting, I would also be interested in adding: preventing the breach of trust. And increasingly, what we've seen is that organisations, and often government organisations, kind of build trust and then break trust, and assume that they can repair trust and it can go back. And we've come up with a new theory that really looks at preserving trust. So what are the active things that you can do to help people navigate new, difficult spaces, where things are not the same as they have been before? And you could do that by breaking trust and then hoping you can get it back? Or you can take the time, and really trying to help organisations and leaders understand what are the components of that, and why do they matter, and why is it so much more important?
So what I wanted to do different from Paul, is to draw attention to five papers that we've got at the minute that really speak to this in different ways. Because what I'm interested in, is trying to get underneath the skin to understand, what are people's journeys into doing the wrong thing? And how can we help recognise what's happening, both for them in that space, but also for organisations?
And so, we've done some work very recently, an Italian government funded study, which was a great organisation to work for, because they work with researchers in co-creating your research. So through them, we've been able to do multiple multilevel studies, both within UK cohorts and with Italian cohorts, and in particular, through that, we've identified and developed a new scale looking at organisational moral disengagement. And so, what we try to understand is: what is the mindset of individuals that makes them morally disengage from what they do, so that they're able to reframe their activities and see them as benign, or not as toxic as they are? But then we've taken that to a new level to help us understand those social and organisational spaces. And what happened there.
And what we've shown is a few things, but quite markedly, what we've shown is that organisational unethical behaviour, so where people collectively think that somehow the organisation is in a right space and yet it's doing really bad stuff. And so we've developed a new questionnaire that you can use that is really trying to help you understand what could be going on in that organisation, and then to put in things to intervene.
And within that study, what we did was, we went back, and we looked at codes of conduct, and we looked at what happens where you have a code of conduct? Is that sufficient? And sadly, what we found was that codes of conduct were positively associated with organisational moral disengagement. So having that piece of paper means you go, ‘Oh yeah, we've dealt with that. We can just carry on.’ And what we were showing was that actually having informal reporting routes has much more efficacy in terms of helping people to stay doing the right thing.
What we found was that codes of conduct were positively associated with organisational moral disengagement
And through a further piece of work, we had two CREST studies that were really trying to get under the bonnet of understanding in more detail: how do organisations live and breathe these things, particularly in high security contexts? And from that, we were really showing the journeys for people as they start at an organisation where there might be a whole number of red flags around them, and the risks that they then pose, that the organisation then doesn't pay attention to.
And so, when you went back, and we use an event mapping system, where we go back in time to look at: where were all the places? Who were all the people that could have done something? And what did they do? And why did they do it? Did they not do it?
Because I think in this space, it's really important to understand: what are the factors that inhibit us from stepping forward and saying, ‘there's a problem here’? And to help and understand how to make it easier for people. So, we've done a lot of work looking at those voice and silences and trying to understand those ways that people are impeded.
It's really important to understand: what are the factors that inhibit us from stepping forward and saying, ‘there's a problem here’?
So, the work that we did through the CREST funding was really highlighting where those individuals are flagged as having potential problems. And then how the organisation, and their colleagues, operate around them actually ended up, in one of our cases, facilitating somebody who wasn't necessarily aware of what they were doing, but actually over time became a massive insider threat.
And it was a fascinating study, particularly thinking about STEM, and thinking about who is attracted to work within a STEM organisation. That might be a neuroatypical, a neurodivergent individual who then is working alongside neurotypicals, and the kind of tensions that are going on in that space.
So in particular, in one of our papers, we've looked at the role of humour, and being able to have a laugh was really important to the neurotypicals in helping them kind of defuse this problematic person. But actually, what they ended up doing was giving them an identity that made them ‘007’, so finally they fitted. But actually, they ended up priming their threat behaviour inadvertently, and they didn't realise, nobody foresaw what was going to happen in that space. And partly that was because of the role of managers and the role of people changing in the organisation, and failing to realise that there were different practices going on.
So a lot of the work that we do draws on the social information processing to say, we're social animals and we're looking at the organisation to work out, ‘Okay, that's what the paper says I'm supposed to do. How is that actually happening here? What are the workarounds that are happening in this space. And who's paying attention to it?’ And particularly, we were showing within that organisation, changes at the top, really again, destabilised the organisation in terms of building trust relationships so that people were starting to look for themselves and outfitting themselves in that space. And that had huge consequences in terms of then feeling that, ‘actually, I don't want to press that button because I know all of the problems that I'm going to cause if I say I've got a concern about somebody. So I'm just going to keep my head down. I'm just going to go and do my job. It's not my problem. It's ‘that’ problem’ and just hoping somebody else was going to deal with it.
So a further piece of work that we've literally just completed and is out, is looking at workaholism. And workaholism is a really interesting space to understand - how do people behave? And in particular, what we've started to realise is that many of the organisational practices actually might be driving, and priming, and rewarding, people to become workaholics. And so, what we were interested in is understanding voice and silence behaviours. To say, is it that workaholics don't see what's going on or that they don't record what's going on?
And through that study, again, that was an Italian government funded study, and again, multi-country. What we were able to show was it wasn't workload, it was this workaholic behaviour that meant that people were focusing on what they were doing. And again, ‘it's not my job to report that bad thing that's going on over there.’
But also, what we were able to identify through that was something called a ‘climate of self-interest'. So where people felt that, ‘I'm focusing on me. I'm focusing on making sure that I get what I want out of this place and space.’ That kind of complements and actually drives, again, motivating that workaholic behaviour. But together, that self-interest made people stop, so they could see the bad stuff, they just didn't report it. Because, again, it's an opportunistic behaviour because, ‘I have to look out for me because this place is not as safe as I would’ve wanted.’ So trying to understand, I think, the kind of social processes and the organisational processes that are going around that, in terms of helping people to report and feel comfortable reporting. But also, to think in that preventive space and understanding - are the processes and systems that we've got actually doing what they're supposed to do? Or are we relying on that piece of paper, and it's not actually fit for purpose?
So together with Karen Renaud and a colleague in Ireland, Lisa Van der Werff, we've done a conceptual paper which is the last paper I wanted to emphasise. And that is really looking at different routes into how insider threat was responded to. And what we tried to do was to look at different levels and look at what happens.
So you have the threat coming into the organisation, and at first, it's only the individual who realises, ‘Oh shit, this has happened. I’ve pressed that button. What on earth is going on?’ But then we were able to really identify how that kind of spills out, because the person isn't able to contain that, both their emotional reaction, but what happens in their immediate interactions, particularly around their manager, had a critical impact, we would argue, on what would happen to the organisation and that resilience in the organisation, and in particular was all around trust.
So are you focusing on a scapegoating process that's trying to identify who is our problem - that bad apple? Or are you identifying what is happening here? And using an open, more appreciative inquiry approach that's trying to gather information, and not blame, as a way of making a space that would enable an individual to be able to recall more accurately what's going on. Because they're not scared and they're not feeling under pressure. But also, it's allowing everyone else to see how you're treating that person, so that they then respond in a more open and constructive way to share information.
And so through that, what we've tried to argue is that this scapegoating route actually means that you are a less resilient organisation because you've primed people then to become more self-interested. Because, ‘I watched what you did with so-and-so, and I'm just not even going to tell you the things that I've seen and the things that I might be concerned about.’ Whereas the other route was much shorter and was really helping people build their confidence, both in the systems, in their line manager, and in each other, to then be able to adapt and take forward the lessons that they learned. So it was a much more resilient workplace as a result of how you were deciding: is it scapegoating? Or is it... what's happened here? With a big question, so that we can unpack that.
So we would welcome the opportunity of taking that conceptual model and helping to build it. We put in an EPSRC application, and we weren't successful in that.
We've got a book, that will be coming out very shortly, that's looking at doctor-on-doctor violence. And in particular, that's really identifying this kind of socio-cognitive approach in terms of trying to understand these mindsets, trying to understand how people think about themselves, the thing that they've done, and the environment that might facilitate that. But also, we've got a chapter there that's looking at this preventative approach, and trying to build on a preventative medicine approach to understanding: how can we approach the whole population to give them information? How can we probe? How can we approach people that we might think might be more targeted, either as a perpetrator in that space, or as the target for a negative event? And then thinking about how can we detect people early on and help pull them back? Or how can we help people return to work when something really bad has happened in that space, and they might feel that the organisation is no longer a trustworthy place for them, but other people will have seen them as well?
So it's really trying to get under the bonnet and look at all of those components of the engine, if you like, and how they all work together, either to backfire and create lots of smoke and confusion. Or to really help you become a much more agile organisation that really is able to understand, and start to think differently about what's going on in your social processes.
29:53 - SW: Ros, absolutely fascinating.
Audience Q&A
Question 1: Trust Perceptions: UK vs Italy
30:02 – SW: Alright, our first audience question is about your research in Italy. Were there any notable differences in the perception of trust between people in the UK and those in Italy?
30:12 - RS: So it's been very interesting, looking at different behaviours and how different cultures accept different behaviours. So we have noticed differences there. We've looked more at this kind of moral disengagement angle with the Italian and British funded studies, rather than at trust per se. But what's been really important about that work is, because of the level of funding that we've had, we've been able to do multi-level longitudinal analysis that's really let us understand the moderating factors and the drivers in different ways. And that's why this kind of understanding, particularly about, individuals’ moral disengagement, versus organisational moral disengagement, being two very different things. I mean, they can coexist within individuals, but they have different outcomes in terms of bad stuff.
31:06 – SW: For the benefit of people not in the room. Can you define from your lens what you mean by trust, and what you mean by moral disengagement for us?
31:15 – RS: So trust, I would argue, is about confidence and vulnerability. So it's about an individual feeling confident that they can go into a space and that they won't be taken advantage of, regardless of their capacity to monitor and control what the other party's doing. So it is that leap of faith that somebody is doing. And what we see when somebody, particularly flips to distrust, is that that becomes much more pervasive.
So, the example I often give, is that we can think about, you know, as you were saying, trustworthiness. So we can think about trustworthiness in terms of competence. So, is somebody good at what they do?
But actually, what we've seen is that benevolence, which is people's care and respect, is actually a much stronger one... And again, it's this social element to us. So that if I feel that I know where you're coming from, I can cope with you having an off day. You know, you've got out of bed the wrong way. Because I know your values and how you approach things... So I'm much more resilient, able to ride out that you're not quite yourself today, but you will be okay tomorrow.
And then the third component there is integrity. And what we see is that where integrity has gone, that is what primes distrust. And that is much more pervasive, so that I don't even want to be in the same space as you. The means of rebuilding the relationship is lost, because I'm not even going to look and engage when you're trying to give me all of these different clues and signals. And the person becomes much more wary and much more fearful, so they close down that interaction and pull away.
33:01 - SW: And moral disengagement?
33:02 - RS: And moral disengagement is based on an idea of Bandura's. It's about understanding how we can reframe what we've done to allow us to carry on and live with ourselves regardless. And he identifies four mechanisms there:
So one is around how we frame what we've done and talk about it. So the language that we use might downplay or euphemise. Within medicine, one of the conversations that we have is, ‘this is an illegal activity, but we call it sexual misconduct’, which sanitises it, so it's not as bad as I thought.
Then we can think about it in terms of the outcomes, and denying what's happened.
Another form of moral disengagement is around what's my responsibility? So feeling that you can diffuse responsibility. You know, ‘I had nothing to do with this. It was the group that decided this’.
And then the final one is the target. So we either dehumanise them, so that the bad thing that we've done doesn't matter, or decide that that person somehow asked for that behaviour.
So these are all mechanisms that allow us to carry on thinking of ourselves as good people. But it's about how we've reframed things, and understanding how people start on that journey of reframing is really important. And often, as Bandura would say, if we were able to interrogate and get them to self-reflect, they realise for themselves, ‘I've started on this journey, and that's not the person I am.’ So this idea about why good people do bad stuff is really important, because if we can have that self-reflection piece, then people are much more able to self-regulate and correct. Rather than having sanctions, whether social or legal, that are fear-based mechanisms. So, ‘I'm stopping my behaviour because I'm scared of being punished or being ostracised’, rather than, ‘I see this as something I want to change in me.’ Does that make sense?
Question 2: Codes of Conduct & Organisational Type
35:09 – SW: A member of the audience asks, when you mentioned that codes of conduct might be associated with increased levels of distrust or insider behaviour, could this be linked to an organisational size and type? For example, are codes of conduct more common in large or corporate organisations? And if so, could that influence insider risk? And can such effects be mitigated?
35:34 – RS: I think that's a great question. What we've seen, particularly around the code of conduct, is exactly as you say: where you get a larger organisation, you need to put that in place. But I guess our concern is: is that an assurance? ‘I've got this piece of paper and therefore we can put a big tick in that box’, rather than understanding all the things that go around ensuring that ‘that’ won't happen. And that's why what we were finding was that where you have a more informal route through which people are able to report, it was much more effective at reducing that moral disengagement. Rather than feeling, ‘oh, we've got this piece of paper so we can all sign up for this stuff. But actually we're doing what we would like to do, and living our best lives.’ As I keep saying to various staff members outside. So people not admitting that their behaviour is negative in this space because, well, ‘we have this code of conduct’. And I sometimes think, particularly as organisations grow in size, and particularly thinking about professions: they need to articulate what it is about being a professional, about codification of behaviour, rather than ‘we're all good chaps here’. And understanding that behaviour is what's required.
36:51 - SW: Paul, would you like to jump in there? Is that clear?
36:54 - PM: I sort of recognise all of this. And I agree. I mean, on the policy thing: definitely, it's been my experience that in many organisations, the existence of policies is a substitute for actually doing anything very much. So I think pretty well any organisation of any reasonable size would have policies about what's acceptable, and they would have codes of conduct.
I think where it starts to get awkward is if that's all they're doing. And in particular, if there's a discrepancy between what the words on the wall say and people's everyday experience. And I think there is evidence for this: if there's a real mismatch between the ‘motherhood and apple pie’ stuff in the code of conduct and people's experience, for example of the way managers or the leadership behave, that in itself is very alienating and is likely, other things being equal, to pump up the insider risk, because it will make people distance themselves from the organisation. So there's that active distrust. And I think this is important: distrust isn't just an absence of trust. It's a positive sense of not trusting an organisation or another person.
I think there's a milder version of it around... again, this is more impressionistic than evidence-based. But one hears lots of opinions expressed about changing attitudes towards employment. It was really brought home to me the other day, talking to somebody about the culture inside their organisation, where they've been a long time. We seem to have shifted to a world where employees now see themselves as consumers. In other words, what I would think of as a loss of a sense of mission, of identifying with the purpose, and more a sense of ‘what's in it for me’.
So that's not actively malevolent or malign. It's not distrustful, but it is a bit disengaged. It's basically saying, you know, ‘this organisation has certain responsibilities towards me. If you pay me, and look after me, and let me do my own thing, I don't really owe it very much back in return.’ I think if the organisation then behaves unethically, that can really fuel distrust.
On trustworthiness... I completely agree that obviously there are whole university departments dedicated to debating the meaning of trust and trustworthiness. So, I completely agree with your take on trust. It's a psychological state. It's not an objective thing. It's accepting that you're putting yourself or your organisation at risk. You're making it vulnerable by accepting that the other person, the other entity has positive intentions towards you. I mean, you've got to trust people and things, otherwise, as somebody once said, you couldn't get out of bed in the morning if you couldn't trust anybody or anything.
Lots of research on public trust in institutions tends to show that... Actually, the institutions that are holding up best seem to be employers. Partly because public trust in governments and journalism and so on has declined in recent years.
Trustworthiness is kind of what the word suggests: the extent to which a person, or an organisation, or an AI, is worthy of being trusted. So you've got to then ask: how do I judge that?
I think personnel security is all about judging trustworthiness. You know, when you're trying to recruit somebody, you're making judgments. Even if you don't know you're doing it. You're making judgments about are they trustworthy now, and will they remain trustworthy? And, if you're clever about it, what should my organisation be doing to maintain and build their trustworthiness?
So the rotten apple metaphor, I think, is pernicious, because it encourages people to think that trustworthiness is a sort of inherent feature of the individual, that they kind of carry around with them. It's like A-level certificates or something. It's not like that, obviously. And there is evidence, in fact: a big CPNI study from some time ago showed that, from known cases at least, a large majority of active insiders become active insiders after they join an organisation. So quite often, it's going to be a consequence of what happens to them during their time in the organisation. Not always. But that will have a lot to do with it.
So trustworthiness is really important. How do you measure it? Really difficult.
There's a fourth factor I'd add; the three that you mentioned are crucial. So there's benign intentions - does this person mean well towards me or my organisation? You need that.
Integrity - when I observe them, do they behave according to ethical principles that I find acceptable? So somebody might mean well towards me, but they might be, you know, a complete scumbag in terms of their behaviour with everybody else. You wouldn't trust them.
The third one is competence. They might mean well and have loads of integrity, but they can't do the things I'm trusting them to do. So they have to have competence as well.
I think the fourth one is reliability and consistency. So there are individuals who've got benign intentions, integrity, and competence. But for various reasons, perhaps because they've got, for example, a chaotic home life, they're just not consistent. They promise to do things and then don't do them. So they don't have malign intentions towards you, but you can't rely on them to do what you've trusted them to do. So I think that is an additional element to thinking about trustworthiness.
43:32 – SW: Thank you. Can organisations, listening to your list of four, be seen in the same light?
43:38 – PM: Oh, yeah. Absolutely. I think with trust and trustworthiness, one should think about entities. They might be individual humans. They might be organisations, governments, or brands. They might be AIs. How you judge them is going to be slightly different, but I think the concept is similar.
So, a high trust organisation. Something I've experienced is going from a very high trust environment to a very low trust environment. I'm not going to say which ones they were! You really notice the difference. The things you might look for are different, but it is ultimately all about trust.
A high trust environment, therefore, is one in which, in an organisation, say, people trust each other. So colleagues are looking out for each other. They're not constantly, sort of, looking over their shoulders. They trust the organisation. The organisation trusts them. Stakeholders and the public trust the organisation. High levels of trust. Everybody wins. Lots of research showing that high trust organisations perform better. They make decisions faster, they’re more innovative, more creative. They're pleasanter places to work. They do better with recruitment and retention of talent. And they have lower levels of insider risk.
So I think the trust lens for personnel security is a very positive one. If you're trying to sell it to the CEO of a rapidly expanding tech company, ‘Well, I haven't got time for this blooming insider thing’, you know, security can be a bit of a grudge purchase. A lot of organisations are quite hardened to this because of the history of cyber oversell. You know, ‘if you don't spend a load of money, all these terrible things will happen’. People can sometimes respond to that by saying, ‘I don't really believe it. I think it'll be fine. We're not going to spend all this money.’
If, however, the proposition is, ‘I'm going to make all this horrible risk not go away, but certainly reduce to a tolerable level. And there'll be lots of business benefits. And this will be a better, more successful organisation, and it'll be more resilient.’ Some organisations have got a big resilience agenda. I think, I wish a few more had, recent events refer... But they're all looking at the same problem in a different way.
46:24 – RS: And just to come in there, what we found when you start changing that trust is that your granularity starts to reduce, so your means of discerning... With organisational trust in particular, it tended to be that competence and integrity collapsed together, versus respect and care. So ‘do you look after me?’ was what was important as an employee. So you didn't get the three or four distinct mechanisms. And we've recently done studies looking at politicians. And again, because they're more removed, it's harder to discern particular aspects of that.
But we've just finished a four-country study, looking at the New Zealand, UK, US, and German elections: where do people get their information from? How much are they trusting political actors in that space? How does that shift as the election comes, and then a few months later? To try and understand, you know, exactly that: why are we losing trust in our politicians?
47:36 – PM: Likewise, I think there are big cultural differences. We've done some work with organisations in the Nordics. And they, quite reasonably, see themselves as very high trust societies. And the culture is very much about valuing trust. One of the practical consequences of that is that, both in terms of the law and in terms of what's culturally acceptable, organisational practices that would be standard in US corporates, for example, essentially surveillance of the workforce, are totally unacceptable in some of these other cultures. So there are practical consequences of national and cultural differences in attitudes towards trust.
It's also recognised, of course, that it's easier to build and maintain high levels of societal trust in relatively smaller populations. Where not quite literally, but sort of figuratively, everybody knows everybody. But yeah, there are big, big cultural differences.
Question 3: Insider Risk Across Global Cultures
49:08 – SW: That's interesting... Following on from that, an audience member noted that the examples discussed focused on Western or European contexts. Is there any evidence from South or East Asian, or African, cultures? Are there significant differences in those regions?
49:27 – RS: There have been a variety of different studies on it, looking at that in different ways. I think one of the most interesting things is around the role of that social space, particularly around collectives, and the fact that you are a more collective unit in particular societies that are more collective. So understanding voice and silence behaviours is really different there, in terms of people not telling you stuff because it is breaking the trust to tell the bad thing. And that can also be about how you articulate that as being somehow shameful. And so people actively hiding what they've done in different ways.
I do a class with my MBA where we divide into different cultures. And they are given a case study. And it's absolutely fascinating and quite shocking for each of them to go, ‘oh you uncover it’ versus ‘you sweep it under the carpet.’ And then trying to understand why would that be acceptable, and why would it be absolutely mortifying, to reveal this bad thing to the whole light of the day?
So I think particularly where you're looking at multicultural workforces, those differences - and there will sometimes be tacit differences - are really, really important. And again, surfacing those and having that kind of codification conversation around, particularly vulnerabilities, and why are we doing this and explaining, can be really important in terms of getting people all on the same page.
51:10 – SW: Just to follow up that question. Would that be the same with moral disengagement as well?
51:17 – RS: Yeah, again, and that's kind of why we did the study in Italy and in the UK, because we were anticipating, you know, because of the Cosa Nostra and other codes of conduct there, that people might feel much more comfortable with it. But it really pulled out that people do notice, and it does lead to these negative behaviours.
And I think, going back to something that Paul said, it's about authenticity of organisations. So if I see that this group of employees are treated this way, whereas this other group are treated completely differently. Then again, it's that that notion of vulnerability that people end up thinking, ‘wait a minute, I could be in that group rather than in this group - so this is not a safe place now.’
Question 4: Preventative Measures & Unintended Consequences
52:03 - SW: Another audience question relates to preventative approaches to insider threats. While preventative measures such as organisational screening can be helpful, they can also lead to unintended consequences like false positives, stigma, or increased anxiety. So, is trust the key to addressing these issues? And how can we implement preventative strategies whilst avoiding those unintended outcomes?
52:30 – PM: Yes, you're absolutely right. I think it's really important to hang on to the distinction, which I think is often blurred in a lot of the analyses of this, between thinking of the problem at the level of the individual and thinking of it at the collective level of the organisation. So, how would you go about detecting and measuring threat or risk? ‘Discuss.’
For an individual, ‘is this person okay?’ is very different from the way you would look at the organisation and say, ‘how are we doing? What's the temperature in the workforce?’ I think preventative measures can work quite well at the organisational level. And a lot of them are around: don't do stuff that's going to really upset people unnecessarily. It's so often business decisions. I've sat in meetings where people have had a brief conversation about insider threat, and then they've gone on to talk about the redundancy scheme without even connecting the two. So, think about what the organisation is doing and how it might affect the risk.
At the individual level. You're absolutely right. You've got to be really careful. This is where technology, I think, has got a really important role to play. At the moment, I'd say it's still a work in progress, and a lot of it has been oversold. So there is a real danger with a lot of the current sort of processes for detecting risk at the individual level, that you will get a lot of false positives.
On research, we've been doing some work on what good research would look like. So much of custom and practice has been drawn from, essentially, case histories. Case histories can be very illuminating. You can learn a lot from a case by asking: what were the precursors? What went wrong? How did the controls fail? What were the consequences? How might we do it better next time? So it's a very valuable source of insight.
But, of course, it's fraught with problems. One of them is the whole hindsight issue. Pretty well every case history - I mean, there's a load of them in my book and there's a load more I didn't put in - pretty well every case history that's actually got out there is one where a risk has materialised: somebody's done something bad, they've been caught, they've been sacked, or prosecuted, or sent to prison, or whatever. There's all these red flags, you know, some spectacular. And everyone looks and says, ‘goodness me, how on earth did that organisation fail to spot this, and this, and this, and this...’, looking back on an individual who goes on to do something terrible. The real flaw in that, of course, is that if you just picked 100 people at random out of the workforce, you'd probably find a lot of those red flags, and most of them won't go on to be active insiders.
So what, you're really looking for, the magic in here, which I don't think anybody has yet quite got hold of, is what are the leading indicators that discriminate between just all the noise and stuff that's going on here, in any workforce, even of, highly trustworthy people? Even in a government vetted sort of context. As I said, if you just picked a random sample and really kind of rifled through what was going on, you'd find a lot of pretty rum stuff, probably. And most of it won't matter.
There's an awful lot of chance and circumstance in the development of insider risk. So many of the models out there are so simplistic. They're very causal linear. If this, then that; and if that, then this. If that happens, then this will happen, and this will happen. And that's not the way human behaviour emerges, in my view. So the nirvana, the kind of, the Holy Grail here is to find leading indicators that really do give you a handle on emerging and future risk. And I'd say the evidence for that is not great at the moment.
There's an awful lot of nonsense out there. There are some hilarious examples of threat indicator lists. You know, ‘how to spot an insider: checklists of personal characteristics’, some of which have just clearly been made up by somebody on a wet Tuesday afternoon. And, I mean, I kid you not, if I was doing slides I'd show you some examples. There's one from a US government authority where it's got things like, you know, changes in personal hygiene, and feeling tired, and failing to return a library book. Stuff like that. I mean, really mad stuff.
So in amongst all that noise, there will be some indicators. If you know what they are. And I don't think anybody really does have a great idea what the best ones are. Then you can begin to think about the intervention. As I said earlier, I think we're more or less, sort of, on the same page on this, which is, given that the vast majority of them are not on a pathway to become terrorists, or spies, or fraudsters, the intervention that would be appropriate for most of them will be some sort of management or welfare intervention. And quite often that'll just make the problem go away.
False positives are a real problem though, for that whole issue of injustice. If you're in a workforce and somebody gets dobbed in and the next thing you know, they're being disciplined or sacked because they've been accused of being an insider threat. There's a good chance they're not. And that will not only make them feel very upset and aggrieved, it'll make everybody, all the bystanders, their colleagues feel that. So you've probably made the situation a lot worse by that kind of clumsy intervention.
A lot of the technology that's currently being sold as insider threat detection - some of it is great, don't get me wrong, and it's getting better very quickly. But some of it has massive false positive rates. And it depends on what you then do. The problem with any security system where you get a lot of false positives - and this is true in physical security as well, with things like alarms - is that if the blooming alarm keeps going off all the time, guess what happens? People just ignore it. They effectively switch it off. And if you do that with insider stuff, that's clearly a problem, because you're acting unjustly when most of your positives are false positives.
I'll stop at that point with the base rate bias: when something is relatively rare... Same with medical diagnosis. If something is rare in the population, whether it's a particular form of cancer or an active insider, and you have a detection process that has even a small error rate, most of your positives are going to be false positives. So handle with great caution, I would say.
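The base rate point Prof Martin makes can be illustrated with a quick calculation. The numbers below are invented for illustration only (not drawn from any study mentioned in the episode): suppose 1 in 1,000 employees is an active insider, and a detection tool flags 90% of real insiders but also flags 1% of innocent staff.

```python
# Illustrative base-rate calculation with made-up numbers.
prevalence = 0.001          # base rate: 1 in 1,000 employees is an active insider
sensitivity = 0.90          # the tool flags 90% of real insiders
false_positive_rate = 0.01  # ...but also flags 1% of innocent staff

true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)

# Probability that a flagged person really is an insider (Bayes' theorem)
ppv = true_positives / (true_positives + false_positives)

print(f"Share of flags that are real insiders: {ppv:.1%}")
print(f"Share of flags that are false positives: {1 - ppv:.1%}")
```

Even with a seemingly small 1% error rate, roughly nine out of ten flags in this sketch are false positives, which is exactly why rare-event detection has to be handled with such caution.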
1:00:22 - RS: Just to build on that, I think there are things that we can notice. I'm really interested in workers as they become older, in the sense that their experiences of injustice, particularly around pensions... And certainly that was what we were finding in our CREST-based study: people's financial futures were suddenly being quite dramatically changed by organisational decisions, and that kind of thing priming people.
But I think there's also things about how people make sense of it. And that's why that kind of moral disengagement is really important: when they view the organisation, there's that need to restore justice. So I think moments where justice is breached are really important, because being able to have an early conversation with someone can often let them travel back.
Becky (Rebecca Stevens, CREST Communications Director) did an animation for us when we were trying to identify different types of people, again from our CREST study. And with that, we found this kind of person who became more and more entrenched in their thinking and more and more vitriolic, and people just backed away from them and wouldn't have the conversation. But I think we can do so much more to help managers have those difficult conversations that might often feel really daunting. I've been a line manager very recently, having those. But actually, if you do have them, they can be hugely helpful for the individual, giving them that space to unpack and relook at something, and then pull back from it. Rather than becoming more entrenched in their views and feeling that there is this injustice that they're seeking to resolve through whatever means.
But I think particularly older... We're finding out with surgeons: why do surgeons get to a particular age, and a particular place in the hierarchy, where people don't criticise them? You know, when you're at the top of the organisation, people blow smoke up your arse. And so you believe your own rhetoric. And does that change how you're viewed? So that you morally disengage? Because no one goes, ‘wait a minute. What's happening there?’
1:02:39 - PM: Our friend over there was smiling. I suspect they were thinking the same thing as me, at the mention of injustice amongst older personnel over pensions. I'm guessing you were thinking about Spycatcher?
(...)
Audience member: I was thinking about my own pension.
PM: Oh, right. I was thinking about Peter Wright. Those of you of a certain age will remember the great Spycatcher scandal from the 1980s. A very, very disgruntled MI5 officer who wrote this scandalous book called Spycatcher, which the government tried, unsuccessfully, to stop. That was allegedly about his pension, I seem to remember.
Question 5: Welfare Support: Resources, Mental Health & Trust
1:03:18 - SW: Here's a follow up from the audience: On the topic of welfare and preventative approaches, what are the resource implications? Can large organisations implement effective welfare procedures, such as mental health programs, without overwhelming their resources? And if mental health is linked to insider risk, what kind of trust would be needed for individuals to feel safe disclosing personal issues within the organisation?
1:03:44 – PM: It's quite a challenge. I mean, I think two things. First, I'd say generally it's about using whatever welfare arrangements are already in place. So, most employers, decent employers, would have some processes in place for helping individuals who are going through whatever issues might be affecting them.
Secondly, the alternative is what? The alternative is some kind of disciplinary intervention, which, apart from being kind of ethically dubious, stands a good chance of actually making the problem worse. I wouldn't envisage it being a completely separate and discrete operation. I would say it's something that people need to trust, though. You're absolutely right. Given that other people are the best detectors of early-stage insider risk - I think that's still true; it may not always be true, but I think it's still true at the moment - then clearly you need to encourage people. So you need reporting that works. And you know, from brilliant research that Ros and colleagues have done, that if people don't trust their organisation, you can have all the reporting channels you like, and people won't use them.
So, it is about the organisation practising what it preaches. Saying to people, ‘if you approach us admitting that you yourself have got problems, we will handle it sensitively, within reason. Equally, if you're concerned about a colleague and you approach us, you can trust that we will handle that sensitively. You're not going to get them into trouble and you're not going to get into trouble.’
It's quite a big... I mean, you've got to have quite a high level of trust before you're going to do that. And famously, of course, trust is hard to build and very easy to destroy. So you've got to really mean it.
1:05:44 – RS: And I think mental health is a really interesting space to consider, because your means of self-regulation are impeded in that space. And understanding how an organisation can itself be producing that overwhelm for individuals is something that we're really not talking about. So, while we look at interventions, we look at them for individuals, rather than understanding that the organisation is potentially complicit in these spaces, in what it's requiring of people...
And that's why we've been looking at this workaholism thing to understand: are modern organisations driving more and more people to have this? That has huge consequences in terms of breaking up families, as well as impacting on the mental health of individuals. So I think it's really important to understand that.
But as you were saying there, the consequences... Somebody going through a disciplinary has huge consequences for everyone around them. And how you then handle that disciplinary can create multiple insider threats, by people feeling that that person... So that kind of scapegoating model that we were using with the insider cyber attacks really comes into play there.
Whereas creating a space that allows people to be their best selves, even when things that are happening, maybe in their personal life or in the work that they've been having to do, compromise that for them... It's that kind of dynamic. So that's what I mean by having a line manager who can have those conversations with you, and maybe give people space, give more individual attention to recognise things - that can, I think, have huge dividends. I think the challenge is that for many organisations there's not the time to have those relationships. But actually they are hugely important and beneficial to understanding.
Creating a space that allows people to be their best selves, even when things that are happening... compromise that for them... can, I think, have huge dividends.
And I think we kind of skirted around it when thinking about AI. AI, particularly when taking over line management responsibilities, doesn't have that nuanced understanding that ‘Wendy can't work on a Wednesday because she has all these childcare things’, or whatever. And there's that potential to then breach trust, at multiple levels, through the automation of decision making that is really about personnel and managers.
So, I think managers might favour some groups, and again, that's where things like EDI are hugely important. And there are high workloads. But high workloads where you have good communication between managers and staff, and where you have high-quality EDI training, are spaces that can help the whole workforce be more resilient.
1:08:39 – PM: Yeah, I agree with all that. I would say, obviously, there are limits and arguably there are some organisations where the balance has shifted, possibly too far, where they knowingly tolerate quite significant risks because they are too risk averse to deal with it.
There's a sort of slightly legalistic approach to this that says we mustn't do anything that might result in a complaint or an employment tribunal. It's a very difficult balance to make. But sometimes tolerating and sitting on a known risk of say, quite significant criminality in somebody's private life... Somebody, somewhere has got to make a judgment about how far...how far is this all going? It depends on what the organisation does, and so on, and so on. But it's not limitless. This accommodation of people's concerns. It's got to be humane, but it's also got to be proportionate, I think.
1:09:43 – RS: And we've definitely found, in our ‘Trust and Control’ paper, that an organisation that tolerates the individual who is always allowed to get away with it suffers huge consequences, because it is not a trustworthy organisation. So again, that's stepping up to a situation to say, here's why this isn't acceptable.
1:10:04 – PM: And there is evidence from employee surveys in the public sector. One of the questions that always tends to produce quite negative responses is about how well the organisation - and this is lots of organisations, not one - how well it manages performance. And what that tends to mean, when you dig down into it, is essentially:
‘I do my job really diligently, and this joker next to me, you know, hardly ever turns up and is lazy and incompetent, and they're allowed to get away with it. And it's not fair.’
That is quite often the underlying concern. And you're back to that sort of alienation, the sense of injustice, ‘it's not fair. This organisation is not treating us in a fair way.’ Fairness doesn't mean everybody gets treated equally, of course. But there is this perception that, you know, there is a degree of fairness.
Question 6: What Does Loyalty Look Like?
1:11:04 – SW: Can I come back to that point where we bring up culture and fairness? I'll come back to that. I just got a quick one. If we accept that trust fosters loyalty, what does loyalty look like?
1:11:14 – PM: So there are lots of definitions. Like all these things, there's a kind of competing set of definitions. I think loyalty, the best definition I know is, it's something along the lines of: somebody in a relationship, shall we say, an employee in a relationship with the employer, stays with that relationship, even in the presence of a potentially better offer. So something about sticking with a relationship. It's not the same as trustworthiness. I don't think.
1:11:51 – RS: I mean, I did that work with you, looking at the loyalty question. And I think part of the difficulty is where it might be viewed as a kind of blind trust. Are you creating a space where an individual might feel trapped in that - where there are obligations that they feel upon them, which then impede how they might see their potential future?
1:12:18 – SW: I’m thinking of football supporters. I mean simplistically, in terms of your loyal... blind loyalty for your team. You know it's going to be in the Second Division or the Championship. I'm looking at teams who will drop... Southampton and Ipswich, who will be in the Championship next year. But we're still loyal. So we get the idea. I don't trust them at all. They're rubbish, but I am loyal to them.
1:12:40 – RS: So I think it's about understanding the other components that are around that. So if it's that you're loyal and you feel pride in that organisation and the values that it has from it. That's very different to feeling that you're loyal because there aren't other options for you, in that you're trapped in this space.
1:13:00 – SW: Would that be significant for cultures, particularly in South Asia, or the Far East, or Eastern cultures?
1:13:08 – RS: I think it could potentially be. And that's why we've been exploring this organisational moral disengagement, because in a way, that's the ultimate loyalty: that we turn a blind eye. And certainly the organisations we were drawing on to build that were knowingly doing things that adversely affected the organisation, and adversely affected customers.
And people were starting to view those customers in a particular way, you know, as ‘the Muppets’. And you can hear all of these kinds of Friday afternoon conversations, when you're dealing with particular spaces and places where it's easy to view the work that you do in a particular way, to kind of lose touch with it, and to not see the consequences of your actions going forward.
So if that's what is breeding...I think you have to be careful. Is loyalty also breeding that moral disengagement that we go, ‘we're just following the rules. These people deserve this’?
You know, it's that wider space, as opposed to something that is around pride. Because I think actually, pride is a hugely undervalued resource for organisations, and feeling that they're in something much bigger.
1:14:34 - PM: I think that's really important. Again, there are different ways of framing this, but I completely agree with you. So: organisations where people identify with what I would think of as the mission or the purpose of the organisation. Other things being equal, that's a more benign environment than one in which people have a sort of cynical disregard for, or an actual disdain for, what the organisation is doing.
The reality, of course, is that it's much easier to identify with the mission of some things than others. If you're in healthcare, for example - if you're a surgeon, if you're in medicine - it's fundamentally easier to identify that as a good thing to be doing. It's socially useful. It's beneficial. It's important. Compare that with what David Graeber referred to as ‘bullshit jobs’, which a lot of people have. They have jobs that, if they stop and think about it, they might regard as a bit pointless. And if those jobs ceased to exist, nobody would care very much.
But the organisation can make a lot of difference. One of the things that slightly worries me - this is again, a personal opinion - is, in parts of the public sector, a kind of excessive focus on process at the expense of a focus on mission or purpose. The organisation actually has a really important function, but all the employees hear, day in, day out, is stuff about procurement and HR and whatever, as though demonstrating best practice in some process was really what it was all about. Those things are important, but obviously that's not why the organisation is there, fundamentally. To motivate people and give them that sense of identity with the organisation, and pride, potentially, I think organisations should sometimes remind people gently of why they're there, and why what they're doing is useful and beneficial, whatever the organisation is.
1:17:00 – RS: And I think linked to that - I sit on the board for the Poverty Alliance, so we're dealing with organisations that are making decisions around being living wage or minimum wage employers. And one of the big things that comes out of that is job design: thinking about how you design a job. Because I think every job can be a well-designed job, even if it's a routine job, that gives people dignity, and purpose, and meaning. But that takes some care and some insight. And in my MBA class, the first class we do looks at purposeful organisations.
Every job can be a well-designed job, even if it's a routine job, that gives people dignity, and purpose, and meaning. But that takes some care and some insight.
And I think every organisation can be a purposeful organisation. And harnessing that purpose creates all of these positive dividends in terms of pride, in terms of belonging, in terms of identity, that then link in exactly to that loyalty space. And we really need to be harnessing that. But what scares me, particularly around AI and the drive to change work, is that it is impoverishing work. And that has consequences in terms of mental health, and in terms of disconnection, particularly of young people, because they don't see their place in life as being necessarily a positive one.
Question 7: Reframing Risk: Threat vs Vulnerability
1:18:30 – SW: The next audience question is about mental health and vulnerability. Is there a case for reframing the conversation from one of threat to one of vulnerability? Could HR or welfare teams play a more creative role in supporting individuals, particularly those at risk, to help them stay engaged and valued within the organisation?
1:18:52 – RS: I think that's an interesting question. We've been looking at vulnerability in a different project that I've been working on, trying to understand situational vulnerability so that we don't label somebody as vulnerable. Because I think that can be very pejorative, that kind of stigmatisation, particularly thinking about mental health.
But it goes back to the design of work, and understanding how situational vulnerabilities can then exacerbate issues for people going forward. So I think there could be a role, but I'm wary about going wholesale into, ‘oh, you're vulnerable and therefore here's this suite of things.’ Rather than helping an individual be an individual, and get help and support, and feel agency and control in that space.
I think there could be a role, but I'm wary about... ‘oh, you're vulnerable and therefore here's this suite of things.’ Rather than helping an individual be an individual, and get help and support, and feel agency and control in that space.
Question 8: Reframing Risk: Reporting Concerns
1:19:44 – SW: And continuing that thought. Is there a way to encourage colleagues to report concerns out of a sense of care or support, rather than purely for disciplinary reasons?
1:19:55 – PM: So I think two things. First of all, language is really important. I'm not a fan of the excessive use of terms like insider threat. If what you're trying to do strategically is to build high-trust organisations, then labelling everybody in the organisation as an ‘insider’ because they have access to something, and the ones you're worried about as ‘insider threats’, sends a slightly chilling message, even though technically it's correct. So it may not be something you'd want to use in public communications. It might be something, you know...
The other point, though, is, most of what goes on under the banner of personnel security doesn't happen in the security function. Whether it's detecting the risk, or doing something about it. Most of the information that you need, isn't sitting in the security function. It'll be sitting in HR. It'll be sitting also in cyber security, for example, in an organisation where people spend a lot of time online. And one of the real challenges for organisations, is actually just drawing that information together. Over and over, and over and over again, I've seen in organisations, where I've been advising or reviewing, these pockets, these silos. All of which are sitting on bits of the jigsaw puzzle, but they just don't come together. So nobody's got the full picture, least of all the people - the personnel security function - who often don't find out about it until far too late. Most of the sort of stuff that's brewing is somewhere down in HR, or audit, or compliance, or legal, or something.
And then the response, what do you do about it? It will often be for the local management, or for HR, or welfare to deal with it, not the security function. Obviously there will be cases where somebody is really seriously a bad person. Where you do need a security response. But a lot of it will be below that threshold.
Just an aside on the language thing. I don't like this sort of binary distinction that's often made between insiders being either malicious or accidental. I think both those labels are quite unhelpful in a way. I don't think anything is ever truly accidental. And the malicious label? Well, how do you know? You can tell whether somebody does something deliberately or not by observing what they do. But your definition of malicious is in the eye of the beholder, really.
Just on the news, a couple of days ago, you may have noticed, Oleg Gordievsky just died. A hero, famous spy, helped to avert World War III in the 1980s. Russian intelligence officer working for the Brits. Hero from our point of view. From the Soviets’ point of view, as it then was during the Cold War, a malicious traitor.
People sometimes cause harm in organisations unintentionally. They click on an attachment - and get better technology, would be my answer to that. They don't mean to cause harm, but they cause harm. And then, at the other extreme, you've got your dedicated Russian spy, or Chinese spy, or terrorist, or criminal working for an organised crime group who's stealing all your money. Very deliberately and consciously doing stuff. Are they malicious? They're intentional. Who knows why they're doing what they're doing? I don't think you can ever get inside the head of an individual insider.
Question 9: Risks of Purpose-Driven Cultures
1:24:20 – SW: An audience member raised the question about organisations that rely heavily on a shared sense of purpose. Could an overreliance on purpose-driven culture cause organisations to overlook other potential risks?
1:24:34 – PM: So I don't know the answer to that. You may know of some research on it; I don't know what the evidence says, if there is evidence on that. But it's a really interesting point. My guess would be that if all the organisation was relying on was a sort of complacent sense of ‘we do good’, then that could be asking for trouble.
I used to be on the board of the Charity Commission, and my impression was that when charities went wrong - and this was a tiny minority, mind you - it was often through poor governance. There had perhaps been a bit too much reliance on the attitude of, ‘we're doing good, so we don't have to worry about competent management, and leadership, and policies, and decision-making, and budgeting. We're doing good.’ So I think that can be dangerous.
If anything - and again, this is just impressionistic - my sense is that in parts of the public sector at the moment it's gone too far the other way: there is an excessive focus on process, and not very much is said about mission and purpose. You need both.
Question 10: Insider Risk Across Generations
1:26:06 – SW: Here's another audience question: when looking at insider threats, are there noticeable differences across generations? For instance, how might risk indicators vary between employees joining at 16 versus those working into their 70s?
1:26:23 – RS: We did some work looking at an energy organisation, and it raised some really interesting questions around young people coming in. What we found was that they actually had two different groups of young people. They had young people coming in as apprentices, who were leaving with qualifications and their ‘dirt’. And then they had a graduate-entry group, who had got into debt from their studies, and were expecting the organisation to deliver on particular promises that it then didn't deliver.
And it was really helpful for this organisation to suddenly see that there isn't a single ‘our young population’. They were bringing in lots and lots of young people because their demographic was going the other way, so they needed to replace people. But it was a real revelation to them that there are these two very different groups, who have very different expectations and attitudes towards you as an organisation and an employer, and very different threat levels. It was quite a surprise for them.
But I think it's about having conversations, trying to understand what people's expectations are, and bringing people together. I think there can sometimes be false assumptions.
As an older person myself, I'd say there can be some false expectations around the roles of older people in organisations and what role they can play. Particularly with younger people, within mentoring and all of that, there might be more guidance, there might be more support that could be offered - and actually it would be more meaningful for them in terms of giving something back to the organisation. So I think there can be huge missed opportunities to give people back a sense of purpose and meaning, at a time when maybe their job roles are changing very profoundly and they're feeling somehow out of kilter with the organisation as a whole. So I think it's more complicated than just going, ‘oh, you're an X’.
1:28:32 – PM: Yeah, I agree with all of that. I think it's always tempting to put people into categories - they're ‘one of those’ - and to attach labels to great groups of people, which is often very misleading.
I mean, clearly we all differ in terms of our experiences. We differ from the moment we're conceived in terms of what genes we've got, let alone what happens to us afterwards. Our experiences, our personal relationships, our personality attributes, education, family dynamics - all of that will be in the mix. Age will be part of it. The longer you've been around, the more stuff will have happened; you will have experienced things that may have affected you long term.
And to get some of those patterns at the population level is, I think, quite useful. I'm very dubious about trying to do that at the level of an individual. I've heard people say things like, ‘oh, well, you know, childhood trauma, that's the big one. If you identify that, you find people.’
No - I mean, it's a systems problem. There are lots and lots of variables. There's stuff to do with the individual. There's stuff to do with their environment. The interaction between them. And then just a lot of happenstance. In order to be an active insider, you've not only got to have the motivation to do it. You've also got to have the opportunity to do it. You've got to be in the right place at the right time. The controls have got to fail. All of those things. So there's a lot of randomness in it.
One of the kind of mindsets in government... I'm on the board of something called the Leadership College of Government. We talk about this not in the context of security, but in terms of leadership and management: this blooming ‘linear model of change’, which government departments are so married to. Which basically says: we want to achieve a policy intervention. We want to, let's say, alleviate child poverty, right? And I'm not saying this is what the current government is currently doing - they have moved on a bit - but this has been a mindset. We're going to choose this intervention, and over a three-year period we're going to have these milestones, and these are our metrics, and at the end of it we will have achieved a 23% reduction in this outcome measure. You pull the lever, and you wait for it all to happen.
That's just not the way the world works. And I'd say the same is true with insiders. There are probably quite a lot of people around in workforces who’ve got all the predisposing factors to be really bad insiders. But they won't be, because of other stuff: protective factors, lack of opportunity. Something else happens that makes them change their mind, and so on.
So I think that's not a counsel of despair. I think research can, and does, highlight factors that do make a difference one way or the other. We tend to focus on the ones that drive up the risk, but there are almost certainly things which tend to reduce the risk. We should try to maximise those where possible. Of course, we should do that, and we should focus on the things that actually work, rather than just sort of making up a list of, ‘well, if we do this, then that will happen.’
We tend to focus on the ones that drive up the risk, but there are almost certainly things which tend to reduce the risk.
We've got some research going on at the moment with the Protective Security Lab, based on the hypothesis that nobody knows how to measure insider risk, and nobody knows how to measure the effectiveness of personnel security. It's been going for a year and a half now, with both empirical work and a literature review. And so far, the hypothesis is holding up quite well.
Question 11: Security Vetting: Streamlining the Process
1:32:39 – SW: This question is about the security vetting process: given how long it can take to clear individuals, and with recent concerns raised by the National Audit Office, is there anything from your research or experience that might suggest how the process could be streamlined? Are we perhaps placing too much emphasis on initial selection rather than ongoing assessment?
1:33:01 – PM: Absolutely, yeah. I mean, you've pressed the start button here... In my personal opinion - though it's not just my opinion, I know - the national security vetting system is painfully slow. And so it encourages a view, in a lot of the organisations that have to do it, that what they see as vetting - which may or may not be the same thing as personnel security, depending on how you misuse the word - is just an impediment. Stuff is being done to improve it. A lot of it is just down to process: old IT systems and so on.
I have listened to lots of organisations who seem to have the view that it would be better if they could just get rid of it, because it gets in the way of getting the job done, it's a barrier to diversity, and so on - as though they've somehow lost sight of its purpose. Which I can kind of understand, because it's a real nuisance in a lot of organisations.
So I think a lot could be done to improve the performance - that is, how quickly it's done. The NAO report is all about performance: how quickly you can clear somebody, how quickly you can do a renewal, and so on. It doesn't actually say very much about effectiveness: does it actually make a difference? I think the jury's out on that.
And on your other point, yes, I agree. Generally speaking, my experience has been - less so in government than in the private sector - that a lot of emphasis is placed on pre-employment screening, which is what a lot of people mean by vetting. And then nothing much happens after that. People are checked out during the recruitment process: are they known to be a bad person? Do they have, for example, a significant criminal record? And if the answer's ‘no’, there's a presumption that they must be trustworthy now and forevermore. They can come in, and there you go.
In theory, there is this sort of ongoing personnel security - in-trust, aftercare, whatever you call it: all the things you do after somebody joins an organisation. That should be where most of the effort is focused, because we know empirically that that's where most of the risk sits: people mostly go wrong after they join, not before.
People do try to join organisations with preexisting malign intentions. So pre-employment screening - don't get me wrong - is really important. It acts as a deterrent; it puts people off trying. But if that's all you do, you're asking for trouble as an organisation. And I think a lot of organisations do shortchange that ongoing bit. A lot of it is to do with management and leadership; there's a bit of technology that can help as well.
The Holy Grail in all of this is what's called Continuous Evaluation: essentially a rich stream of data, somehow magically plucked out of the ether and from other people, that's continually assessed. As stuff happens that flags up an individual as a risk, it happens more or less in real time. That would be, in principle, a good thing to do. In practice, it's really difficult. And so the system is still, broadly, a snapshot every so many years. For the lower levels of government vetting, the policy astonishingly says it only has to be redone every ten years. Which is, in my view, a bit mad. I can understand it for practical reasons, and some organisations will refresh more frequently than that. But obviously somebody can have a whole career in ten years, and a lot can happen to them. So whatever the vetting said ten years ago may not be true now.
1:37:21 – RS: I wonder also whether vetting sends a signal to the individual that this is a serious job? So actually it links in with identity. And certainly we found in our CREST study that people looked at it that way: it was a badge that they wanted, because it took effort and took a long time. So I think there probably is a sweet spot between giving sufficient time to gather the evidence, and the individual feeling that it's part of their journey into the organisation - a valued one.
So I think there might actually be some merit in having a period of time that is about that: this is a serious job that has serious stuff associated with it, and you, as the individual, are thinking about that. So it's two-way, rather than the organisation just grabbing the individual. And thinking about that ten-year journey between vettings - a lot can happen. How do you feel about that?
1:38:22 – PM: I think that's a really important point. It has sometimes occurred to me that we don't make enough of that. If somebody has got through a rigorous vetting process, particularly at the higher levels, somebody has had a pretty good look at them and determined that they are trustworthy to quite a high degree. We should value that. I think people who've been through that process should value it a bit more: ‘this serious organisation thinks I'm pretty trustworthy.’ But so often it is just seen as a kind of tedious bureaucratic process, a hoop that people have to jump through. Which is a shame. It's missing out on some of the value of it.
1:39:06 - RS: And from a staff perspective, whether vetting is seen as the impediment to you joining, or as the gatekeeping process that gives you your ‘badge of honour’ - that's a very different perspective for those staff members, isn't it?
Question 12: AI Insiders: What Are They?
1:39:22 – SW: And finally, earlier you mentioned the idea of AI insiders. Could you expand on what you meant by that?
1:39:31 – PM: Yeah. Again, I'll try to be brief. I'm doing some work on this with colleagues in the Alan Turing Institute. So it's quite a specific issue. Essentially, functions and activities that people used to do are increasingly being handed over to intelligent machines, AIs. And some of the things that human insiders used to do, the AI could do as well - in principle, and actually we're getting the first glimmers of it in practice. So just like a human insider, an AI could, if you like, go wrong and start causing harm because of something that's going on inside the AI. Or it could go wrong and start causing harm because it's being got at by an external threat actor - a hostile foreign state, or an organised crime group. Probably, at the moment, mostly a hostile foreign state.
And, unsurprisingly, threat actors, as always, are trying to be ahead of the game. Of course they're thinking about this. Like any technology, it's dual use - any technology, apart from nuclear weapons, is dual use. And there's stuff going on already where AIs are being got at by external threat actors for various reasons.
So there's this emerging issue, or risk, of insider entities that are not humans. We've done some work around how you think about it: what kinds of concepts and principles might apply? And actually, to our genuine surprise, we find a lot of the concepts and principles that you would find useful to think about human insiders, a lot of them work pretty well with artificial insiders as well. The extent to which they're externally influenced, for example, and so on.
...to our genuine surprise, we find a lot of the concepts and principles that you would find useful to think about human insiders, a lot of them work pretty well with artificial insiders as well.
And a lot of the things that people fret about with AIs - ‘oh, they're biased’ - well, join the club. We're all biased, aren't we? There's a whole field of psychology about human cognitive and psychological biases. So don't be surprised if machines are biased as well, particularly as all they know is what they've learned from us. So I think we will see more examples of insider entities, as distinct from purely human insiders. Watch this space...
1:42:43 – RS: Just to add to that: what concerns me around the AI space is the human interaction, where humans kind of back off and go, ‘oh, too difficult’, or fail to engage, to interrogate in a particular way. And certainly working with Karen Renaud has really opened my eyes to people saying the system doesn't go wrong - when actually she has lots of evidence of the system absolutely going wrong, and of people feeling confident to go, ‘no, no, it's a system thing’. So I think having the human as part of the solution, rather than just devolving to computers, is a productive way forward - not least in terms of keeping that skill set for humans, so that they can detect the problems rather than going, ‘it's just too difficult.’
1:43:32 - PM: That's a really interesting point. There's a sort of comforting belief that a distinction can be drawn between decision support tools, and automated decision making. And it's okay to have decision support tools. But it's not okay for the decision to be automated. I think increasingly that line has got very blurred because if your decision support tool is really good... And AIs are just better than humans at doing some things. I mean, on specialist tasks, including things like some forms of medical diagnosis, they're not only much faster, they're better, they're more accurate. So why wouldn't you trust the machine? The danger is that it then effectively becomes the decision maker. I don't know the answer to that.
There's a lot of hot air being blown around at the moment about accountability as well: ‘you can't let machines make decisions because they can't be held accountable.’ I'm not entirely convinced that the one flows logically from the other.
So, I think there's a real problem at the moment with understanding the similarities and differences between AIs and humans, particularly as AIs become so spookily able to mimic human behaviour - though they do it in a completely different way. And opinions differ amongst the experts, I've discovered. I'm not an AI expert, but I've learnt a little bit about it. I'd say the majority of them, for example, do not believe that current AIs are in any sense conscious. But they behave as though they are. They pass the Turing test: if you didn't know it was an AI, you'd think it was a human.
1:45:33 – SW: Okay, before we wrap up, are there any final questions from the audience?
Wrap up
1:45:42 – SW: Closing Remarks:
Alright. Just to say, personally, I found today's discussion absolutely fascinating, and I hope it translates well once the recording is published. A huge thank you again, to everyone for coming. And especially to our speakers, Ros and Paul, for sharing such thoughtful, and thought-provoking insights. If you'd like to continue the conversation informally, we've got coffee and cake available here. So, feel free to stick around to chat to our speakers.
And finally, thanks to Becky and Shyavi for organising today's session. Thank you all very much.
Academic Publications
The Enabling Role of Internal Organizational Communication in Insider Threat Activity – Evidence From a High Security Organization
This paper explores the role of internal communication in one under-researched form of organizational crisis, insider threat – threat to an organization, its people or resources, from those who have legitimate access. In this case study, we examine a high security organization, drawing from in-depth interviews with management and employees concerning the organizational context and a real-life incident of insider threat. We identify the importance of three communication flows (top-down, bottom-up, and lateral) in explaining, and in this case, enabling, insider threat. Derived from this analysis, we draw implications for communication and security scholars, as well as practitioners, concerning: the impact of unintentional communication, the consequences of selective silence and the divergence in levels of shared understanding of security among different groups within an organization.
(From the journal abstract)
Rice, C., & Searle, R. H. (2022). The Enabling Role of Internal Organizational Communication in Insider Threat Activity – Evidence From a High Security Organization. Management Communication Quarterly, 36(3), 467-495. https://doi.org/10.1177/08933189211062250
Read more
Searle, R. H., & Rice, C. (2024). Trust and high control: An exploratory study of Counterproductive Work Behaviour in a high security organization. European Journal of Work and Organizational Psychology, 34(3), 392–402. https://doi.org/10.1080/1359432X.2024.2344870
Gustafsson, S., Gillespie, N., Searle, R., Hope Hailey, V., & Dietz, G. (2020). Preserving Organizational Trust During Disruption. Organization Studies, 42(9), 1409-1433. https://doi.org/10.1177/0170840620912705
Knoll, M., Fida, R., Marzocchi, I., Searle, R., Connelly, C., & Ronchetti, M. (2025). Quiet Workaholics? The Link Between Workaholism and Employee Silence and Moral Voice as Explained by the Social-Cognitive Theory of Morality. Journal of Organizational Behavior, 46, 745-764. https://doi.org/10.1002/job.2867
Fida, R., Skovgaard-Smith, I., Barbaranelli, C., Paciello, M., Searle, R., Marzocchi, I., & Ronchetti, M. (2024). The suspension of morality in organisations: Conceptualising organisational moral disengagement and testing its role in relation to unethical behaviours and silence. Human Relations, 0(0). https://doi.org/10.1177/00187267241300866
Copyright Information
As part of CREST’s commitment to open access research, this text is available under a Creative Commons BY-NC-SA 4.0 licence. Please refer to our Copyright page for full details.
IMAGE CREDITS: Copyright ©2025 A.Armistead / CREST (CC BY-SA 4.0)