How can we compare a one-off awareness-raising workshop in a school with a multi-session bystander intervention training for professionals? Efforts to measure P/CVE outcomes have long been hampered by disparate metrics and frameworks that preclude cross-project and portfolio-level evaluation. Researchers and practitioners have lamented this problem for over a decade (Williams & Kleinman, 2013; Lewis et al., 2020), dating back at least to the 2013 ‘Symposium on Measuring the Effectiveness of CVE Programming’ hosted by the Government of Canada.
Shared metrics provide a tailored yet replicable approach to evaluation: offering local relevance while enabling both cross-project comparisons and portfolio-level insights.
Shared metrics represent an effective mechanism for resolving this problem of comparing ‘apples to oranges,’ and have been recognised as good practice for cross-project and portfolio-level evaluations (European Union & United Nations, 2023). The use of shared metrics has also been tested in both the UK and New Zealand, through the present authors’ evaluations of the Mayor of London’s Shared Endeavour Fund and the New Zealand Government’s P/CVE Fund.
This article illustrates the advantages of a shared metrics approach and describes its ongoing use in the Shared Endeavour Fund since 2021.
Comparing ‘Apples to Apples’: The Benefits of Shared Metrics
The primary advantage of a shared metrics approach is that similar project types within a broader portfolio use the same set of measurement instruments. This allows comparisons between projects with common objectives, as well as aggregation of their results to determine the overall impact of a portfolio.
By employing a well-formulated shared set of measurement instruments, portfolio managers can ensure that rigorous, consistent, and independent evaluation is conducted at both the project and portfolio levels, regardless of the evaluation capacity and expertise of individual project implementers. However, to avoid a ‘cookie cutter’ approach to evaluation, such instruments must be assigned to projects from a suite of prospective options based on:
- Their relevance and appropriateness per project.
- Consensus between evaluators and project implementers.
- Sufficient customisability, so that instruments can be tailored to each initiative (e.g., referring to the specific threats addressed by a given project).
Equally, when comparing the results of these metrics, evaluators must take project fidelity into account, as differences in programming scope and delivery can significantly influence outcomes.
Another advantage of collecting aggregable data is that it inherently lends itself to mapping the composition of a project portfolio and identifying which areas of programming are adequately represented and where gaps remain. Metrics can be developed to track a wide range of factors, including extremist ideologies, identity-based harms, prevention approaches, target audiences, geographic locations and outcomes. By building a comprehensive picture of these factors, managers can make targeted adjustments to ensure that a portfolio aligns with their strategic priorities and the evolving threat landscape.
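Because composition data is aggregable, even a lightweight tally can surface gaps. The sketch below shows one way such a mapping might look; the record fields, category labels and priority list are illustrative assumptions, not the actual taxonomy of any fund.

```python
from collections import Counter

# Illustrative project records; field names and values are hypothetical,
# not any fund's actual metadata schema.
projects = [
    {"theme": "raising awareness", "ideology": "extreme right-wing", "location": "Hackney"},
    {"theme": "building psychosocial resilience", "ideology": "Islamist", "location": "Newham"},
    {"theme": "raising awareness", "ideology": "extreme right-wing", "location": "Croydon"},
]

# Tally portfolio composition along each tracked factor.
by_theme = Counter(p["theme"] for p in projects)
by_ideology = Counter(p["ideology"] for p in projects)

# Compare coverage against strategic priorities to surface gaps.
priority_ideologies = {"extreme right-wing", "Islamist", "incel"}
gaps = priority_ideologies - set(by_ideology)

print(by_theme)
print(by_ideology)
print(f"Priority ideologies with no current coverage: {gaps}")
```

In practice, the tallies would be drawn from grant-management records rather than hard-coded lists, but the principle (consistent categories in, portfolio map out) is the same.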
Finally, by streamlining evaluation processes and providing project implementers with rigorous, easy-to-use data collection tools, shared metrics can help overcome limitations in implementers’ evaluation capacity and expertise. In turn, this allows project implementers to focus their efforts on delivery while enhancing their ability to demonstrate their effectiveness both to funders and other key stakeholders. Over time, this approach can serve to build a culture of evaluation among implementers by strengthening their confidence and capacity in this area.
Shared Metrics in Practice: The Shared Endeavour Fund
The Mayor of London’s Shared Endeavour Fund is a prevention funding scheme for civil society organisations managed by the London Mayor’s Office for Policing and Crime (MOPAC) and administered by Groundwork London. It offers grants of £25,000 to £100,000 for seven-month primary and secondary prevention projects addressing intolerance, hate, extremism or terrorism in London (MOPAC, 2024).
The Fund is designed to fill an increasingly recognised gap in whole-of-society approaches to preventing extremism: funding and support for civil society (Intelligence and Security Committee of Parliament, 2018; MOPAC, 2019). By providing such resources, the Fund seeks to empower civil society organisations to act as more effective prevention partners for government: leveraging their unique access to, knowledge of, and credibility among local communities.
Shared Endeavour Fund projects are expected to contribute towards one or more of the Fund’s four priority themes: raising awareness; building psychosocial resilience; promoting prosocial behaviours; and strengthening prevention capabilities. First launched in 2020, the Shared Endeavour Fund began its fifth round of funding in September 2024. Over the past four years, the Fund has delivered more than £3,000,000 in grants to 96 projects, reaching over 147,000 Londoners. During this time, the portfolio has covered a wide range of extremist ideologies, identity-based harms and related prevention topic areas.
Shared Metrics Methodology
Since 2021, the Shared Endeavour Fund has employed a shared metrics methodology using a mixed-methods approach that integrates qualitative and quantitative data to evaluate project implementation (i.e., fidelity) and both project- and portfolio-level outcomes (i.e., effectiveness). This dual strategy ensures a robust and consistent assessment of project activities while accommodating a diversity of project types and providing greater understanding of overall portfolio impact.
Fidelity
Project fidelity is assessed independently by two evaluators, who compare project plans and reporting against a bespoke rating rubric covering the four domains bulleted below (a scoring sketch follows the list). The information produced by this process is designed not only to assess implementation quality and consistency with planned outputs, but also to provide a comprehensive overview of the portfolio’s composition from year to year.
- Project activities and reach: Compares actual versus planned beneficiary numbers and outputs.
- Beneficiary targeting and selection: Evaluates the appropriateness and evidence-based selection of target groups.
- Quality of implementation: Measures beneficiaries’ attitudes towards the activities in which they participate.
- Data collection and reporting: Assesses adherence to sampling and survey administration protocols, with a focus on identifying and addressing inconsistencies.
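As a concrete illustration, the sketch below shows one way two evaluators’ independent rubric ratings might be combined and checked for agreement. The 1–5 scale and the one-point reconciliation threshold are illustrative assumptions, not the Fund’s actual rubric.

```python
# Dual-rater fidelity scoring sketch; the rating scale and the
# reconciliation rule are assumptions for illustration.
DOMAINS = [
    "activities_and_reach",
    "beneficiary_targeting",
    "implementation_quality",
    "data_collection_reporting",
]

# Each evaluator independently rates every domain from 1 (poor) to 5 (excellent).
rater_a = {"activities_and_reach": 4, "beneficiary_targeting": 5,
           "implementation_quality": 4, "data_collection_reporting": 3}
rater_b = {"activities_and_reach": 4, "beneficiary_targeting": 3,
           "implementation_quality": 4, "data_collection_reporting": 3}

for domain in DOMAINS:
    a, b = rater_a[domain], rater_b[domain]
    # Ratings more than one point apart are flagged for discussion
    # before a final score is agreed.
    status = "agreed" if abs(a - b) <= 1 else "reconcile"
    print(f"{domain}: mean={(a + b) / 2:.1f} [{status}]")
```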
Effectiveness
Effectiveness is assessed using a suite of survey instruments that measure key outcomes in beneficiaries’ knowledge, attitudes and behaviours. These instruments include bespoke but theoretically informed tools such as a ‘Bystander Intervention Readiness’ scale and a ‘Message Inoculation’ scale, along with peer-reviewed, ‘off-the-shelf’ instruments such as the ‘Measure of Tolerance’ (Hjerm et al., 2020) and the ‘Brief Resilient Coping Scale’ (Sinclair & Wallston, 2004).
Surveys are administered using a retrospective pre–post design. This approach simplifies implementation for project teams by requiring only a single survey at the end of the project, instead of separate pre- and post-surveys administered before and after project activities. A retrospective design also helps to mitigate some of the response biases associated with typical pre–post surveys, thus improving the accuracy of the evaluation (Klatt & Taylor-Powell, 2005).[1]
Outcomes Measured by the Shared Endeavour Fund Evaluation

| Raise Awareness | Promote Prosocial Behaviours | Build Psychosocial Resilience | Strengthen P/CVE Capabilities |
| --- | --- | --- | --- |
| Extremism awareness and concern; message inoculation; digital literacy | Civic engagement and responsibility; on- and offline hate incident reporting; bystander intervention | Emotional resilience; self-esteem; sense of meaning and purpose; sense of belonging; empathy and perspective-taking; tolerance of difference | Prevention capacity and expertise; radicalisation reporting; value of resources and tools |
Survey instruments are assigned to each project based on the project’s stated aims and content, in consultation with the implementing organisation. This encourages buy-in and provides a second opportunity to verify the appropriateness of a given set of instruments. Project implementers are subsequently expected to administer those surveys to a predetermined sample size of their beneficiaries.
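One way this assignment step might be structured is sketched below: instruments in the suite are tagged by the outcome theme each measures, and a project’s stated themes produce a shortlist for discussion. The instrument names follow the table above, but the tagging scheme and matching rule are assumptions for illustration.

```python
# Hypothetical suite of instruments tagged by the priority theme each
# measures; the tag mapping is an assumption, not the Fund's actual scheme.
SUITE = {
    "Bystander Intervention Readiness": "promote prosocial behaviours",
    "Message Inoculation": "raise awareness",
    "Measure of Tolerance": "build psychosocial resilience",
    "Brief Resilient Coping Scale": "build psychosocial resilience",
}

def propose_instruments(project_themes: set[str]) -> list[str]:
    """Shortlist instruments matching a project's stated themes; the final
    assignment is agreed in consultation with the implementer."""
    return [name for name, theme in SUITE.items() if theme in project_themes]

print(propose_instruments({"raise awareness", "promote prosocial behaviours"}))
```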
All surveys include quality control items (Maniaci & Rogge, 2014) to screen out inattentive responders prior to data analysis. Additionally, each survey instrument is tested for measurement reliability. Statistical analyses are then performed to test for significant differences between the pre- and post-survey responses, tracking how beneficiaries’ knowledge, attitudes and behaviours have changed and providing an overview of portfolio impact.
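To make this pipeline concrete, the sketch below runs the three steps (attention-check screening, reliability testing, and a paired significance test) on simulated data. The data layout, the screening rule and the thresholds are assumptions for demonstration, not the Fund’s actual protocol.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 120  # simulated beneficiaries

# Simulated retrospective pre/post responses to a 4-item, 1-5 Likert scale.
# A latent trait drives all four items so the scale is internally
# consistent, as real scale items should be.
latent = rng.normal(3.0, 0.8, size=(n, 1))
pre = np.clip(latent + rng.normal(0, 0.5, size=(n, 4)), 1, 5)
post = np.clip(pre + 0.4 + rng.normal(0, 0.3, size=(n, 4)), 1, 5)

# 1. Screen out inattentive responders. A real survey would use a directed
#    item (e.g., "select 'agree' here"); a random mask stands in.
attention_ok = rng.random(n) > 0.05
pre, post = pre[attention_ok], post[attention_ok]

# 2. Test measurement reliability with Cronbach's alpha
#    (commonly judged acceptable at >= 0.7).
def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

alpha = cronbach_alpha(post)

# 3. Paired test for significant pre-to-post change in mean scale scores.
t_stat, p_value = ttest_rel(post.mean(axis=1), pre.mean(axis=1))
print(f"alpha={alpha:.2f}, t={t_stat:.2f}, p={p_value:.4f}")
```

Because every project using a given instrument reports on the same scale, per-project results from such tests can then be aggregated to produce the portfolio-level overview described above.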
Considerations When Applying Shared Metrics
While shared metrics provide a useful and innovative approach for evaluating P/CVE portfolios, there are some factors that need to be considered when selecting this methodology. Most importantly, is there sufficient budget, lead time and expertise to establish the required evaluation systems?
...it is crucial that evaluation systems are established in advance of project delivery and embedded within the portfolio’s existing grant management and reporting structures.
Implementing a shared metrics approach across a portfolio of prevention projects requires a dedicated evaluation budget. This might not be immediately feasible for portfolio managers with limited financial flexibility, though much of the initial cost can be recouped by reallocating evaluation budgets otherwise earmarked for individual projects. Similarly, for optimal effectiveness, it is crucial that evaluation systems are established in advance of project delivery and embedded within the portfolio’s existing grant management and reporting structures. If these systems need to be retrofitted onto a portfolio cycle that is already in progress, it can hamper the evaluation’s ability to accurately track outcomes, thus reducing the reliability and comparability of its findings.
Project implementers may also be hesitant to adopt a shared metrics approach, especially if they belong to grassroots organisations that are not accustomed to rigorously evaluating their own initiatives. To overcome this reluctance, shared metrics should be tactfully introduced, and evaluators will likely need to be available throughout delivery to provide project implementers with guidance and support.
Beyond these standard considerations, there are also two limitations specific to the shared metrics approach employed to evaluate the Shared Endeavour Fund:
- Due to time and resource constraints, this methodology is primarily geared towards assessing relatively short-term outcomes. That is not to say, however, that longer-term data collection cannot be integrated into this approach to determine the extent to which observed effects endure over time. Assuming beneficiaries can still be accessed, survey instruments could be readministered at a later date to assess the trajectories of initial outcomes.
- Shared metrics approaches tend to be inherently outcome-oriented, focusing on the magnitude of changes rather than explaining how or why such changes occur. To achieve a deeper understanding of a portfolio’s impact and improve knowledge of the factors that drive or impede project results, this approach may be supplemented with qualitative methods such as observations, focus group discussions and interviews.
Conclusion
More than ever, there is a need for governments and communities to come together to take an active role in P/CVE, but—to derive the greatest benefit from available funding—we need to know which projects are worth supporting. By utilising shared metrics, portfolio managers can more easily compare the fidelity and effectiveness of projects that they support and use those insights to make informed funding decisions. Likewise, supplying project implementers with easy-to-use data collection tools can help them refine their initiatives and communicate their impact to donors and other key stakeholders.
[1] Although response biases associated with typical pre–post surveys are well-documented, retrospective pre–post surveys are also susceptible to various forms of bias, including recall and social desirability biases. Such factors can influence respondents’ ability to reflect accurately on the prior state of their knowledge, skills, attitudes, behaviours, etc., potentially leading to over- or underestimation of programme outcomes. Typical pre–post designs may be most appropriate when a potentially conservative estimate of programme outcomes is sought and a given measure (or set of measures) can be assessed objectively (for a detailed discussion, see Geldhof et al., 2018).
Read more
European Union & United Nations (2023). Compendium of good practices: Measuring results in counter-terrorism and preventing and countering violent extremism. https://bit.ly/4gWG5MT
Geldhof, G. J., Warner, D. A., Finders, J. K., Thogmartin, A. A., Clark, A. & Longway, K. A. (2018). Revisiting the utility of retrospective pre-post designs: The need for mixed-method pilot data. Evaluation and Program Planning, 70, 83–89. https://bit.ly/4kdGn4V
Hjerm, M., Eger, M. A., Bohman, A. & Fors Connolly, F. (2020). A new approach to the study of tolerance: Conceptualizing and measuring acceptance, respect, and appreciation of difference. Social Indicators Research, 147(3), 897–919. https://bit.ly/4bkmdC0
Intelligence and Security Committee of Parliament (2018). The 2017 Attacks: What needs to change? https://bit.ly/4kjAEuf
INTRAC (2021). M&E of Civil Society Funds. https://bit.ly/3EX5hWo
Klatt, J. & Taylor-Powell, E. (2005). Synthesis of Literature Relative to the Retrospective Pretest Design. AEA Connect. https://bit.ly/4iaCuvy
Lewis, J., Marsden, S. & Copeland, S. (2020). Evaluating Programmes to Prevent and Counter Extremism. CREST. https://bit.ly/438w77W
Maniaci, M. R. & Rogge, R. D. (2014). Caring about carelessness: Participant inattention and its effects on research. Journal of Research in Personality, 48, 61–83. https://bit.ly/3Qwwwto
MOPAC (2019). A Shared Endeavour: Working in Partnership to Counter Violent Extremism in London. https://bit.ly/4bljDvW
MOPAC (2024). Countering Violent Extremism. https://bit.ly/4hTgupl
Sinclair, V. G. & Wallston, K. A. (2004). The development and psychometric evaluation of the Brief Resilient Coping Scale. Assessment, 11(1), 94–101. https://bit.ly/3EWJdLq
Williams, M. J. & Kleinman, S. M. (2013). A utilization-focused guide for conducting terrorism risk reduction program evaluations. Behavioral Sciences of Terrorism and Political Aggression, 6(2), 102–146. https://bit.ly/4igXHEo