Community-based violence prevention (CVP) programmes often receive seed funding from government bodies or philanthropic organisations, with the expectation that they will achieve long-term sustainability beyond the initial funding cycle. Most discussions of sustainability focus on securing new funds. While critical, funding is not the sole determinant of a programme's enduring success. Insights from implementation science, an interdisciplinary field focused on translating evidence-based interventions into routine, real-world practice, reveal additional factors that increase the likelihood that a programme endures. Understanding these factors has profound implications for evaluating CVP programmes.
What factors predict successful sustainment of community-based violence prevention programmes?
Sustaining CVP programmes depends on several interrelated factors, which can be grouped into structural, process-related, and outcome-related elements. Figure 1 summarises these factors.

Figure 1. Sustainability Factors
Structural Factors
Structurally, sustainability depends on stable funding, sound governance, and institutional support. Programmes with diverse funding sources are more resilient because they are not overly dependent on any single grant or donor; this financial diversity provides stability even after initial funding ends. Programmes embedded within their communities, addressing local needs and reflecting local values, are more likely to attract the financial and social support needed for sustainment. Sound governance keeps programmes on track with their strategic goals and mission. Finally, institutional support from community groups and partners reinforces the programme's importance over the long term.
Process-Related Factors
Process-related factors include effective leadership, stakeholder engagement, community alignment, and continuous learning. Strong leadership provides vision, advocates for the programme, and ensures alignment with both community needs and best practices in violence prevention. Engagement from community leaders, law enforcement, healthcare providers, school staff and administrators, and residents enhances a programme’s credibility and relevance. These stakeholders can advocate for the programme, promoting its benefits to the broader public and influencing policy support. Continuous learning allows programme administrators and other stakeholders to monitor effectiveness, adapt to changing environments, and improve over time.
Outcome-Related Factors
Outcome-related factors focus on measurable impact, adaptability, and visibility. Programmes demonstrating clear, positive outcomes, such as reduced recidivism or increased access to mental health services, are more likely to gain support from funders and policymakers. Adaptability is equally crucial: programmes must adjust to changing social, economic, or political conditions to maintain relevance and effectiveness. Programme champions within the community and among stakeholders raise awareness of the programme's benefits, fostering a supportive environment that advocates for its continuation.
What lessons can we draw from implementation science to inform the evaluation of community-based violence prevention programmes?
Lesson 1: Evaluation of structures and processes is as critical to long-term sustainability as the evaluation of outcomes.
Programme evaluations are often more concerned with the effect an intervention had on an outcome of interest (outcome evaluation) than with how the programme achieved that outcome (process evaluation). Sustainability research suggests that the questions typically addressed in process evaluations, such as the procedures and resources needed to achieve those outcomes, are just as important to long-term success.
Lesson 2: Implementation science research into sustainability can inform the collection of relevant data.
The implementation science literature identifies specific structural and process dimensions that are empirically associated with programme sustainability. Assessing these factors when evaluating CVP programmes can therefore enhance funders' and other stakeholders' understanding of a programme's long-term potential. Survey-based data collection methods, common in the implementation science literature, offer a way to quantitatively assess the likelihood of programme sustainment. By tracking improvements in these factors over time, funders and other stakeholders can judge whether a programme is on the path to achieving sustainability. A systematic review of sustainability surveys, published in the journal Implementation Science in 2022, provides a helpful starting point for collecting these data.
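To illustrate how such tracking might work in practice, the sketch below (in Python) shows one way a programme team could aggregate survey responses into domain-level sustainability scores and compare two waves of data collection. The domains, item scores, and threshold here are hypothetical and are not drawn from any specific instrument; the surveys reviewed by Hall et al. (2022) would supply the actual domains and items.

```python
# Illustrative sketch with hypothetical domains and 1-5 item scores:
# aggregate survey responses into per-domain sustainability scores and
# flag domains that are low or declining between two waves.
from statistics import mean

baseline = {
    "funding_stability":       [3, 2, 3, 2],
    "partnerships":            [4, 4, 3, 4],
    "organisational_capacity": [3, 3, 2, 3],
    "programme_adaptation":    [2, 3, 2, 2],
    "programme_evaluation":    [2, 2, 3, 2],
}
follow_up = {
    "funding_stability":       [3, 3, 4, 3],
    "partnerships":            [4, 3, 3, 4],
    "organisational_capacity": [3, 4, 3, 3],
    "programme_adaptation":    [3, 3, 3, 4],
    "programme_evaluation":    [3, 4, 3, 3],
}

THRESHOLD = 3.0  # hypothetical cut-off indicating a domain needs attention


def domain_scores(wave):
    """Average the item scores within each domain."""
    return {domain: mean(items) for domain, items in wave.items()}


def flag_domains(before, after, threshold=THRESHOLD):
    """Return domains that remain below the threshold or have declined."""
    flags = {}
    for domain in before:
        b, a = before[domain], after[domain]
        if a < threshold or a < b:
            flags[domain] = {"baseline": round(b, 2), "follow_up": round(a, 2)}
    return flags


if __name__ == "__main__":
    before, after = domain_scores(baseline), domain_scores(follow_up)
    for domain, score in sorted(after.items()):
        change = score - before[domain]
        print(f"{domain:25s} {score:.2f} ({change:+.2f} since baseline)")
    print("Needs attention:", flag_domains(before, after) or "none")
```

Repeating this kind of comparison at each evaluation cycle gives funders and programme staff a simple, shared view of whether the structural and process factors discussed above are strengthening over time.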
Lesson 3: Evaluating programmes for continuous improvement enhances the likelihood of sustainability.
In addition to identifying metrics, the academic literature on sustainability suggests that the act of collecting evaluation evidence is itself predictive of long-term success. One mechanism is obvious: CVP programmes that produce a measurable impact on outcomes their stakeholders care about are more likely to receive additional funding. However, programme evaluation benefits sustainability in other, less intuitive ways. For example, data are needed to develop feedback loops for continuous learning: collecting meaningful data as part of systematic evaluation generates the information that feeds these loops, allowing the programme to adapt over time. Research suggests that adaptable programmes are more likely to be sustained because they create structures capable of responding to emerging challenges and shifts in community needs.
Lesson 4: Evaluation capacity is a key structural factor for long-term programme success.
Building capacity for evaluation is a precondition for systematically collecting data to support continuous improvement. Evaluation capacity involves creating the policies, processes, and resources that support data collection, analysis, and use of findings. A critical aspect of this capacity is staff who have the relevant knowledge, skills, and attitudes. Training staff in evidence-based practices, securing resources for evaluation, ensuring technical assistance is available, and establishing robust data collection methods are all ways to build evaluation capacity.
Key Takeaways for Practitioners to Enhance Sustainability
- Diversify funding to achieve programme resilience.
- Root your programme in local needs and values to keep people invested.
- Regularly gather and review data to spot what is working and where you need to pivot, and to demonstrate results.
- Build evaluation capacity into your programme from the start: make it part of everyday operations, not an afterthought.
Read more
Cooper, B. R., Bumbarger, B. K. & Moore, J. E. (2015). Sustaining evidence-based prevention programs: Correlates in a large-scale dissemination initiative. Prevention Science, 16(1), 145-157. https://bit.ly/4h4pRBo
Hall, A., Shoesmith, A., Doherty, E., McEvoy, B., et al. (2022). Evaluation of measures of sustainability and sustainability determinants for use in community, public health, and clinical settings: A systematic review. Implementation Science, 17, 81. https://bit.ly/3D8P13S
Guyadeen, D. & Seasons, M. (2018). Evaluation theory and practice: Comparing program evaluation and evaluation in planning. Journal of Planning Education and Research, 38(1), 98-110. https://bit.ly/41ykUMT
Johnson, K., Hays, C., Center, H. & Daley, C. (2004). Building capacity and sustainable prevention innovations: A sustainability planning model. Evaluation and Program Planning, 27(2), 135-149. https://bit.ly/4iiiWps
Meyers, D. C., Durlak, J. A. & Wandersman, A. (2012). The quality implementation framework: A synthesis of critical steps in the implementation process. American Journal of Community Psychology, 50(3), 462-480. https://bit.ly/43cFsvy
Palinkas, L. A., Chou, C.-P., Spear, S. E., Mendon, S. J., Villamar, J. & Brown, C. H. (2020). Measurement of sustainment of prevention programs and initiatives: The sustainment measurement system scale. Implementation Science, 15, 71-85. https://bit.ly/4kkxc2L
Stalker, K. C., Brown, M. E., Evans, C. B., Hibdon, J. & Telep, C. (2020). Addressing crime, violence, and other determinants of health through community‐based participatory research and implementation science. American Journal of Community Psychology, 66(3-4), 392-403. https://bit.ly/3ER0SUW
Stirman, S. W., Kimberly, J., Cook, N., Calloway, A., Castro, F. & Charns, M. (2012). The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implementation Science, 7(1), 17-35. https://bit.ly/3CUCQb2
Van Meerkerk, I., Kleinhans, R. & Molenveld, A. (2018). Exploring the durability of community enterprises: A qualitative comparative analysis. Public Administration, 96(4), 651-667. https://bit.ly/4k8PaoR
Walker, A., Steele, S., Allen, M. & Arreola, N. (2023). Prevention program sustainability and associated determinants: A literature review, Version 1.0. Reports, Projects, and Research. 53. https://bit.ly/4hRPI0M
Copyright Information
As part of CREST’s commitment to open access research, this text is available under a Creative Commons BY-NC-SA 4.0 licence. Please refer to our Copyright page for full details.
IMAGE CREDITS: Copyright ©2025 R. Stevens / CREST (CC BY-SA 4.0)