Adrian Cherney outlines five methods of outcome evaluation of countering violent extremism (CVE) programmes.

One consistent argument made in the literature is the need for more investment in the evaluation of CVE programmes (Braddock 2019; Sydes et al. 2023). While it is inaccurate to state that few CVE programmes have been evaluated – see the Read More section – the main focus of governments and agencies has been on implementing and delivering programmes, with evaluation often an afterthought. It needs to be acknowledged, though, that the evaluation of CVE programmes is not without its challenges, and while we might want to use the most rigorous methods (such as randomised controlled trials – see below), it may not always be possible to evaluate CVE initiatives according to the “gold standard” of programme evaluation (Braddock 2019; Cherney et al. 2018). Some evaluation is better than no evaluation at all, and a range of pragmatic decisions need to be made about the feasibility of adopting a particular method (funding, expertise, time, data access). While a range of different approaches and terminology have been proposed in the literature, there are two main approaches to programme evaluation:

  • Outcome evaluation: identifying the impact of an intervention.
  • Process evaluation: assessing whether an intervention was delivered/implemented as designed and recommended.

This review is concerned with outlining options for outcome evaluation.
What are options for an outcome evaluation?

Any criteria chosen to measure programme outcomes should reflect the objectives of the initiative. These will be determined by the type of programme being implemented – for example, whether it is a CVE programme concerned with primary, secondary or tertiary prevention. The approach chosen to evaluate a programme's impact, for example assessing outcomes relating to changes in violent extremist attitudes and behaviours, needs to weigh the strengths and weaknesses of each particular method. These methods are summarised below.

1: Post-intervention evaluation

This method involves collecting data from programme participants at a single point in time. For example, this might involve interviewing participants about whether their skills, knowledge and behaviour have changed since taking part in a programme, or surveying target groups about their awareness of, or exposure to, a publicity campaign since its launch. This is a straightforward approach to evaluation. However, because data collection occurs at only a single point in time, one must be careful about making claims of effectiveness – that is, that the intervention being evaluated caused any observed change (referred to as causation) – because there is no baseline (pre-intervention) data collection and no comparison group.

2: Pre-/post-intervention evaluation

This method entails collecting data from a single intervention group at two points in time: before the programme starts and after it finishes. The baseline (pre-programme) data and the post-intervention data should measure the same problem, behaviour or attitude the intervention aims to change. This method is better than option one above, in that a before-and-after measure can demonstrate change, but given there is no comparison or control group, claims about impact and causation need to be made with some caution.
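The logic of a single-group pre-/post-intervention comparison can be sketched in a few lines of code. The scores below are entirely hypothetical – they simply stand in for a measure of the attitude or behaviour an intervention aims to change, taken before and after the programme:

```python
# Hypothetical attitude scores for one intervention group, where a
# higher score means stronger endorsement of the attitude the
# programme is trying to reduce.
pre_scores = [7, 6, 8, 5, 7, 6]   # baseline, before the programme starts
post_scores = [5, 4, 6, 5, 5, 4]  # after the programme finishes

def mean(xs):
    return sum(xs) / len(xs)

# The outcome measure is the average change from baseline: a negative
# value suggests the measured attitude has moderated.
change = mean(post_scores) - mean(pre_scores)
print(f"Mean change from baseline: {change:.2f}")
```

The sketch also shows the design's limitation: the calculation can demonstrate that scores changed, but nothing in it rules out other explanations for the change, which is why a comparison or control group (options four and five) strengthens causal claims.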

3: Longitudinal comparative case analysis

This method involves examining individual client progress over time (i.e., across multiple dated observations of participation in an intervention) and is particularly applicable to case-managed interventions aimed at at-risk individuals and convicted terrorists (Cherney & Belton 2019; 2023). It involves examining data on individual progress as it pertains to specific clients (cases) and their intervention goals, and comparing markers of change over time and across clients. This can draw on various indicators of change, such as those relating to risk factors and prosocial behaviours, compliance with intervention activities, non-compliance and setbacks. It can be used when it is not possible to have a control or comparison group. While specific to case-managed interventions, it is a more rigorous approach than options one and two above. However, one still needs to be mindful about making claims of causation, as it is very narrow in focus.

4: Pre-/post-intervention evaluation with a comparison group

This design is similar to the pre-/post-intervention design (i.e., baseline data and post-intervention measures are collected), but the key difference is that you have a comparison group. For instance, many CVE programmes target the behaviours or beliefs of individuals or groups of people, e.g., youth. This approach involves collecting data on those behaviours or beliefs from the target group prior to and following the intervention, and also from a group that does not receive the intervention but is similar in key attributes or background characteristics (ethnicity, race, age, gender and socio-economic status) to the intervention target group. Outcomes (behaviours/experiences) are then compared between the two groups, which can provide some confidence in concluding whether the intervention has had an impact or not. Ideally you want to see a change in the intervention group and no change in the comparison group. This option is more rigorous than options one, two and three above, but one challenge is ensuring the non-intervention group is sufficiently similar to those who received the programme so that meaningful comparisons can be made.
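One common way to quantify the comparison this design makes possible is a difference-in-differences calculation: the change in the intervention group minus the change in the comparison group. The sketch below uses hypothetical scores on a shared scale; the calculation itself is standard, but the numbers are illustrative only:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical attitude scores, measured on the same scale for both
# groups, before and after the intervention period.
intervention_pre, intervention_post = [7, 6, 8, 5], [5, 4, 6, 4]
comparison_pre, comparison_post = [7, 7, 6, 6], [7, 6, 6, 6]

# Change within each group over the intervention period...
intervention_change = mean(intervention_post) - mean(intervention_pre)
comparison_change = mean(comparison_post) - mean(comparison_pre)

# ...and the difference between those changes: the extra change seen
# in the intervention group over and above the comparison group.
did = intervention_change - comparison_change
print(f"Intervention change: {intervention_change:+.2f}")
print(f"Comparison change:   {comparison_change:+.2f}")
print(f"Difference-in-differences: {did:+.2f}")
```

The comparison group's change acts as an estimate of what would have happened anyway, which is why the ideal pattern described above is change in the intervention group and little or no change in the comparison group.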

5: Pre-/post-intervention evaluation with a control group

This method involves randomly assigning your pool of programme participants either to participate in the programme (referred to as the treatment group) or to not participate (referred to as the control group). This is often termed the experimental method or randomised controlled trial (RCT). Data on outcome measures (the behaviours and attitudes to be changed) are collected from the treatment and control groups before, during and after the programme. This method is regarded as more robust than options one to four above. This is because the treatment and control groups are selected from the applicable targets of the intervention, and allocation to either group is random, so individuals have an equal chance of being assigned to each, which helps to reduce any possible bias in allocation to the intervention and non-intervention samples. Ideally you want to see change in the treatment group and no change in the control group. However, there are challenges in adopting this type of approach, such as the logistics of random assignment, and agencies may not feel comfortable with an individual assessed as a violent extremist or terrorist risk being allocated to a control group that receives no intervention. RCTs are regarded as more robust in demonstrating causation between an intervention and observed outcomes (Braddock 2019).
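The random-assignment step that distinguishes an RCT can be sketched as follows. The participant identifiers are hypothetical placeholders; the point is simply that a shuffle gives every eligible individual an equal chance of ending up in either group:

```python
import random

# Hypothetical pool of eligible programme participants.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle the pool and split it in half, so each individual has an
# equal chance of being allocated to treatment or control.
rng = random.Random(42)  # fixed seed only so the split is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
treatment = sorted(shuffled[:half])  # receive the intervention
control = sorted(shuffled[half:])    # do not receive the intervention

# Every participant ends up in exactly one group.
assert set(treatment) | set(control) == set(participants)
assert set(treatment).isdisjoint(control)
```

In practice the hard part is not the mechanics shown here but the logistics and ethics noted above, particularly withholding an intervention from individuals assessed as posing a risk.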

Conclusion

The five methods of outcome evaluation outlined here all have their pros and cons. Ideally, evaluation should be planned ahead of programme implementation so that methods and processes for collecting data on outcome measures can be put in place before an intervention is delivered. No matter which approach is adopted to assess programme outcomes, planning is central to good evaluation practice.

Read more

Braddock, K. (2019). A brief primer on experimental and quasi-experimental methods in the study of terrorism. International Centre for Counter-Terrorism Policy Brief. The Hague. https://bit.ly/4bgjNVe

Brouillette-Alarie, S., Hassan, G., Varela, W., Ousman, S., Kilinc, D., Savard, É. L., ... & Pickup, D. (2022). Systematic review on the outcomes of primary and secondary prevention programs in the field of violent radicalization. Journal for Deradicalization, (30), 117-168. https://bit.ly/3EWcoyl

Cherney, A. & Belton, E. (2023). The evaluation of case-managed programs targeting individuals at risk of radicalisation. Terrorism and Political Violence, 35(4), 846-865. https://bit.ly/3QyHXR4

Khalil, J. & Zeuthen, M. (2016). Countering violent extremism and risk reduction: A guide to programme design and evaluation. Royal United Services Institute. https://bit.ly/3Qzs9xx

Lewis, J., Marsden, S. V. & Copeland, S. (2020). Evaluating programmes to prevent and counter extremism. CREST. https://bit.ly/438w77W

Lewis, J., Marsden, S., Cherney, A., Zeuthen, M., Rahlf, L., Squires, C. & Peterscheck, A. (2024). Case management interventions seeking to counter radicalisation to violence and related forms of violence: A systematic review. Campbell Systematic Reviews, 20(2), e1386. https://bit.ly/4bpnVCj

Malet, D. (2021). Countering violent extremism: assessment in theory and practice. Journal of Policing, Intelligence and Counter Terrorism, 16(1), 58-74. https://bit.ly/4gXCoGP

Mazerolle, L., Eggins, E., Cherney, A., Hine, L., Higginson, A. & Belton, E. (2020). Police programmes that seek to increase community connectedness for reducing violent extremism behaviour, attitudes and beliefs. Campbell Systematic Reviews, 16(3), e1111. https://bit.ly/4hZo9Tm

Thompson, S. K. & Leroux, E. (2023). Lessons learned from dual site formative evaluations of Countering violent extremism (CVE) programming co-led by Canadian police. Journal of Policing, Intelligence and Counter Terrorism, 18(1), 24-42. https://bit.ly/4kjtZjJ

Williams, M. J. (2020). Preventing and countering violent extremism