
Evaluation
Welcome to this issue of the CREST Security Review which focuses on counterterrorism and counter-extremism evaluation. The articles reflect the challenges and opportunities of understanding ‘what works’ in this field, and describe recent developments in what is a vibrant field of research and practice.
Articles cover macro questions, such as Erika Brady’s perspective on key considerations when trying to understand the impact of national and international counterterrorism strategies. Organisational-level processes are addressed in Daniel Koehler and his colleagues’ article on the quality control and management practices developed by the Competence Center Against Extremism in Baden-Wuerttemberg, Germany (konex). Efforts to understand individual-level dynamics are covered in Sian Watson and Jonathan Kenyon’s piece on how to measure change in terrorism offenders.
We bring together different perspectives on evaluation, including those of practitioners such as Kiren Vadher, who highlights the importance of evaluation for accountability, effective decision making, and identifying and managing risks. Matt Allen and Andrea Walker draw insights from the broader field of implementation science to examine the factors which influence the sustainability of community-based violence prevention programmes.
The place of gender equality and the Women, Peace, and Security agenda in programme evaluation is covered by Jessica White and Isabella Vogel’s analysis, which argues that, as part of a commitment to gender mainstreaming, counterterrorism and military operations need to adopt an intersectional gender lens to fully understand the impact of interventions on all aspects of society.
Several articles focus on programmes to counter and prevent violent extremism. Irina van der Vet and Leena Malkki draw on their experience with the INDEED project to set out five principles programme designers should foreground when trying to nurture an evaluation culture. Adrian Cherney outlines the pros and cons of five methods for evaluating the outcomes of CVE programmes. Michael J. Williams and Tim Hulse share their experience of using shared metrics to develop ‘cross-project comparisons and portfolio-level insights’.
In an article on evaluation in Targeted Violence and Terrorism Prevention programmes, James Lewis and Sarah Marsden argue for a change in emphasis: focusing less on asking ‘what works’ and more on understanding ‘how programmes work’. Gordon Clubb examines the delicate balance between transparency, communication and evaluation, describing the potential pitfalls of overly transparent communication in sensitive security contexts. Bianca Slocombe and Rachel Monaghan describe the EVIL DONE framework for interpreting terrorist targeting, and Ardi Janjeva explains the important role evaluation can play in identifying and mitigating risks from generative AI.
Together the articles reflect the breadth and depth of research and practice in the field, highlighting the challenges that still need to be addressed, and the advances made to date.
Finally, addressing the broader aspects of security research, Nick Dale outlines the security risks associated with current vetting methods. For further research underpinning these articles and additional reading, refer to the ‘Read More’ section.
Sarah Marsden
Guest Editor, CSR.