RemNote Community

Program evaluation - Core Foundations of Evaluation

Understand program evaluation’s purpose, major paradigms, and key evaluation models.

Summary

Program Evaluation: Definition and Approaches

Introduction: What Is Program Evaluation?

Program evaluation is the systematic process of collecting, analyzing, and interpreting information about a program, project, or policy to answer key questions about its performance. The fundamental purpose of evaluation is to help stakeholders (funding organizations, policymakers, program implementers, and program participants) understand whether a program achieves what it claims to achieve.

When you hear "program evaluation," two critical questions typically frame the work:

- Does the program work? (effectiveness) Is the program achieving its intended goals?
- Is it worth the cost? (efficiency) Does the program provide good value for the resources invested?

Evaluators come from diverse disciplinary backgrounds, including sociology, psychology, economics, social work, political science, and public administration. This diversity of perspectives enriches evaluation work, as professionals bring different expertise and ways of thinking about complex social programs. Importantly, evaluations use both quantitative methods (numerical data, statistical analysis) and qualitative methods (interviews, observation, narrative analysis), often combining the two to get a complete picture.

The Three Main Evaluation Paradigms

A paradigm is a fundamental way of thinking about and approaching a problem. In evaluation, three major paradigms represent different philosophies about what counts as valid knowledge and how evaluation should be conducted. Understanding these paradigms is crucial because they shape how evaluators design studies, collect data, and interpret findings.

The Positivist Paradigm

The positivist paradigm relies on objective, observable, and measurable data. Think of this approach as asking: "What can we measure and count?"
This paradigm emphasizes:

- Quantitative evidence and numerical data
- Observable outcomes that can be measured consistently
- Reducing bias through standardization and controls
- Testing whether programs achieve their stated objectives

Within positivist evaluation, evaluators typically examine several assessment dimensions:

- Needs assessment: Do people actually need what the program offers?
- Program theory assessment: Is the program's underlying logic sound? Does the theory of how it should work make sense?
- Process assessment: Is the program being implemented as designed?
- Impact assessment: Does the program actually produce the intended outcomes?
- Efficiency assessment: What is the cost per participant? Could the same results be achieved more cheaply?

The positivist approach is particularly useful when you need clear, measurable evidence of whether a program works, making it common in public health, education, and government accountability contexts.

The Interpretive Paradigm

The interpretive paradigm takes a fundamentally different approach. Rather than starting with measurement and numbers, it asks: "What are the experiences, perspectives, and understandings of the people involved?"

This paradigm emphasizes:

- Understanding stakeholders' perspectives, experiences, and expectations
- Recognizing that people's interpretations and meanings matter
- Using qualitative methods such as observation, interviews, and focus groups
- Integrating both qualitative and quantitative data when useful

The interpretive evaluator acts as a guide who helps stakeholders understand their program from the inside. Instead of asking "Does this program raise test scores by 5 points?" (positivist), an interpretive evaluator might ask "How do students, teachers, and parents experience this program? What do they think is working or not working?"
This approach is particularly valuable when programs aim to improve quality of life, foster understanding, or support personal growth: outcomes that are hard to capture with numbers alone.

The Critical-Emancipatory Paradigm

The critical-emancipatory paradigm goes beyond understanding; it aims for social transformation and empowerment. Grounded in action research, it asks: "How can evaluation help empower communities and change oppressive structures?"

This paradigm emphasizes:

- Participatory evaluation, in which community members are active partners in the evaluation process rather than just subjects being studied
- Activism aimed at addressing power imbalances and social inequities
- Understanding and challenging the structural barriers that limit program effectiveness
- Creating knowledge that leads to action and social change

This approach has been particularly influential in international development and work in developing countries, where external evaluators often lack deep understanding of local contexts and community members are best positioned to identify what needs to change.

A key distinction: while the positivist evaluator aims to be neutral and objective, and the interpretive evaluator aims to understand different perspectives, the critical-emancipatory evaluator explicitly takes a stance to challenge existing power structures and advocate for marginalized groups.

A Critical Point: Context and Politics in Evaluation

Here is something that can be tricky to grasp: evaluation is never purely neutral or objective, regardless of which paradigm is used. All evaluation work occurs within specific socio-political contexts, meaning the existing power structures, ideologies, and political environments of a community or organization.
This matters because:

- Evaluation results can be used to advance particular ideological, social, or political agendas
- The questions chosen for evaluation reflect values and priorities
- Interpretations of findings are influenced by the evaluator's own perspectives and the stakeholders' interests
- A program showing positive results in one context might show different results in another

Understanding this does not mean evaluation is useless or that findings cannot be trusted. Rather, it means skilled evaluators acknowledge the role of context and work to ensure findings are fair, accurate, and useful regardless of how they might be used politically.

Key Evaluation Frameworks

One of the most influential evaluation frameworks is the CDC Six-Step Framework, introduced by the Centers for Disease Control and Prevention in its Framework for Program Evaluation in Public Health (1999). It structures evaluation around six steps:

1. Engage stakeholders: identify and involve all relevant parties in planning the evaluation
2. Describe the program: clarify what the program is and what it intends to accomplish (often through logic-model development)
3. Focus the evaluation design: decide which questions the evaluation will answer and how
4. Gather credible evidence: collect data using appropriate methods
5. Justify conclusions: link judgments to the evidence and to agreed-upon standards
6. Ensure use and share lessons learned: communicate results to stakeholders and use findings to improve the program

The framework is paired with four standards for judging the quality of an evaluation: utility, feasibility, propriety, and accuracy. It reflects good evaluation practice by emphasizing stakeholder involvement, clear program definition, rigorous evidence collection, and practical use of findings.
Additional Evaluation Types and Approaches

Beyond paradigms and frameworks, evaluations can be categorized by their purpose:

- Formative evaluation: conducted during program development to improve the program
- Summative evaluation: conducted at the end to determine overall effectiveness
- Utilization-focused evaluation: designs the evaluation around how findings will actually be used
- Developmental evaluation: supports programs in complex, changing environments by providing real-time feedback
- Theory-driven evaluation: tests and refines the program's underlying theory of change
- Realist evaluation: focuses on understanding what works, for whom, and under what circumstances
- Principles-focused evaluation: evaluates adherence to core principles rather than outcomes alone
- CIPP model (Context, Input, Process, Product): assesses all stages of a program from planning through completion

While these distinctions are useful, they are less central to a foundational understanding than the three paradigms and the frameworks discussed above.
Flashcards
How is program evaluation defined in terms of its process and purpose?
The systematic collection, analysis, and use of information to answer questions about projects, policies, and programs.
What does it mean for a program evaluation to assess effectiveness?
Determining whether a program does what it intends to do.
What does it mean for a program evaluation to assess efficiency?
Determining whether a program provides good value for money.
What are the core emphases of the Positivist Paradigm in evaluation?
Objective, observable, and measurable aspects, emphasizing quantitative evidence.
What is the primary goal of the Interpretive Paradigm in program evaluation?
To understand stakeholder perspectives, experiences, and expectations before judging program merit.
What is the underlying basis and aim of the Critical-Emancipatory Paradigm?
Action research aimed at social transformation and empowerment.
In what geographical context is the Critical-Emancipatory Paradigm especially useful?
Developing countries.
What are the various types of program evaluation listed in the text?
Formative evaluation, summative evaluation, utilization-focused evaluation, developmental evaluation, theory-driven evaluation, realist evaluation, principles-focused evaluation, and the CIPP model (Context, Input, Process, Product).
Which organization developed the six-step framework for public-health program evaluation in 1999?
The Centers for Disease Control and Prevention (CDC).
What specific developmental tool is included in the CDC's framework for program evaluation?
Logic-model development.

Key Concepts
Evaluation Paradigms
Positivist paradigm
Interpretive paradigm
Critical‑emancipatory paradigm
Evaluation Models and Frameworks
Utilization‑focused evaluation
CIPP model (Context, Input, Process, Product)
CDC Six‑Step Framework
Types of Evaluation
Formative evaluation
Summative evaluation
Developmental evaluation
Program evaluation