Program evaluation - Core Foundations of Evaluation
Understand program evaluation’s purpose, major paradigms, and key evaluation models.
Summary
Program Evaluation: Definition and Approaches
Introduction: What Is Program Evaluation?
Program evaluation is the systematic process of collecting, analyzing, and interpreting information about a program, project, or policy to answer key questions about its performance. The fundamental purpose of evaluation is to help stakeholders—including funding organizations, policymakers, program implementers, and program participants—understand whether a program achieves what it claims to achieve.
When you hear "program evaluation," two critical questions typically frame the work:
Does the program work? (effectiveness) — Is the program achieving its intended goals?
Is it worth the cost? (efficiency) — Does the program provide good value for the resources invested?
Evaluators come from diverse disciplinary backgrounds including sociology, psychology, economics, social work, political science, and public administration. This diversity of perspectives enriches evaluation work, as professionals bring different expertise and ways of thinking about complex social programs.
Importantly, evaluations use both quantitative methods (numerical data, statistical analysis) and qualitative methods (interviews, observation, narrative analysis), often combining both to get a complete picture.
The Three Main Evaluation Paradigms
A paradigm is a fundamental way of thinking about and approaching a problem. In evaluation, there are three major paradigms that represent different philosophies about what counts as valid knowledge and how evaluation should be conducted. Understanding these paradigms is crucial because they shape how evaluators design studies, collect data, and interpret findings.
The Positivist Paradigm
The positivist paradigm relies on objective, observable, and measurable data. Think of this approach as focused on "what can we measure and count?"
This paradigm emphasizes:
Quantitative evidence and numerical data
Observable outcomes that can be measured consistently
Reducing bias through standardization and controls
Testing whether programs achieve their stated objectives
Within positivist evaluation, there are several assessment dimensions that evaluators typically examine:
Needs Assessment: Do people actually need what the program offers?
Program Theory Assessment: Is the program's underlying logic sound? (Does the theory of how it should work make sense?)
Process Assessment: Is the program being implemented as designed?
Impact Assessment: Does the program actually produce the intended outcomes?
Efficiency Assessment: What is the cost per participant? Are there ways to achieve the same results more cheaply?
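To make the efficiency dimension concrete, here is a small sketch of the underlying arithmetic. The two job-training programs, their costs, and their participant counts are entirely made-up illustrative figures, not drawn from the text:

```python
# Hypothetical illustration of an efficiency assessment:
# cost per participant vs. cost per unit of outcome.

def cost_per_participant(total_cost: float, participants: int) -> float:
    """Average cost of serving one participant."""
    return total_cost / participants

def cost_per_outcome(total_cost: float, outcomes_achieved: int) -> float:
    """Cost of producing one unit of the intended outcome
    (e.g., one participant who completes job training)."""
    return total_cost / outcomes_achieved

# Two hypothetical job-training programs with made-up figures.
program_a = {"cost": 500_000, "participants": 1_000, "completions": 400}
program_b = {"cost": 300_000, "participants": 500, "completions": 250}

for name, p in [("A", program_a), ("B", program_b)]:
    print(
        f"Program {name}: "
        f"${cost_per_participant(p['cost'], p['participants']):.2f}/participant, "
        f"${cost_per_outcome(p['cost'], p['completions']):.2f}/completion"
    )
```

In this sketch, Program B costs more per participant ($600 vs. $500) but less per completed training ($1,200 vs. $1,250), which shows why an efficiency assessment must specify which unit of "result" it is pricing.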
The positivist approach is particularly useful when you need clear, measurable evidence of whether a program works, making it common in public health, education, and government accountability contexts.
The Interpretive Paradigm
The interpretive paradigm takes a fundamentally different approach. Rather than starting with measurement and numbers, it asks: "What are the experiences, perspectives, and understandings of the people involved?"
This paradigm emphasizes:
Understanding stakeholders' perspectives, experiences, and expectations
Recognizing that people's interpretations and meanings matter
Using qualitative methods like observation, interviews, and focus groups
Integrating both qualitative and quantitative data when useful
The interpretive evaluator acts as a guide who helps stakeholders understand their program from the inside. Instead of asking "Does this program raise test scores by 5 points?" (positivist), an interpretive evaluator might ask "How do students, teachers, and parents experience this program? What do they think is working or not working?"
This approach is particularly valuable when programs aim to improve quality of life, foster understanding, or support personal growth—outcomes that are hard to capture with numbers alone.
The Critical-Emancipatory Paradigm
The critical-emancipatory paradigm goes beyond understanding—it aims for social transformation and empowerment. This paradigm is based on action research and asks: "How can evaluation help empower communities and change oppressive structures?"
This paradigm emphasizes:
Participatory evaluation where community members are active partners in the evaluation process, not just subjects being studied
Activism aimed at addressing power imbalances and social inequities
Understanding and challenging structural barriers that limit program effectiveness
Creating knowledge that leads to action and social change
This approach has been particularly influential in international development and work in developing countries, where external evaluators often lack deep understanding of local contexts and community members are best positioned to identify what needs to change.
A key distinction here: while the positivist evaluator aims to be neutral and objective, and the interpretive evaluator aims to understand different perspectives, the critical-emancipatory evaluator explicitly takes a stance to challenge existing power structures and advocate for marginalized groups.
A Critical Point: Context and Politics in Evaluation
Here's something that can be tricky to understand: evaluation is never purely neutral or objective, regardless of which paradigm is used. All evaluation work occurs within specific socio-political contexts—the existing power structures, ideologies, and political environments of a community or organization.
This matters because:
Evaluation results can be used to advance certain ideological, social, or political agendas
The questions chosen for evaluation reflect values and priorities
The interpretations of findings are influenced by the evaluator's own perspectives and by stakeholders' interests
A program showing positive results in one context might show different results in another context
Understanding this doesn't mean evaluation is useless or that findings can't be trusted. Rather, it means skilled evaluators acknowledge the role of context and work to ensure findings are fair, accurate, and useful regardless of how they might be used politically.
Key Evaluation Frameworks
One of the most influential evaluation frameworks is the CDC Six-Step Framework, developed by the Centers for Disease Control and Prevention. This framework, outlined in the Framework for Program Evaluation in Public Health (1999), provides a structured approach built around six steps:
Engage Stakeholders — Identify and involve all relevant parties in planning the evaluation
Describe the Program — Clarify what the program is and what it intends to accomplish (often through logic-model development)
Focus the Evaluation Design — Decide which questions the evaluation will answer and which design and methods fit those questions
Gather Credible Evidence — Collect data using appropriate methods
Justify Conclusions — Link judgments explicitly to the evidence and to agreed-upon standards
Ensure Use and Share Lessons Learned — Communicate results to stakeholders and use findings to improve the program
Alongside these steps, the framework specifies four standards for judging the quality of the evaluation itself: utility, feasibility, propriety, and accuracy. Taken together, they reflect good evaluation practice by emphasizing stakeholder involvement, clear program definition, rigorous evidence collection, and practical use of findings.
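Because the framework leans on logic-model development when describing the program, it can help to see a logic model written out as plain data. The after-school tutoring program below, and every entry in it, is a hypothetical illustration (not from the text), sketching the standard inputs-to-outcomes chain:

```python
# A minimal sketch of a logic model as plain data, using a
# hypothetical after-school tutoring program. A logic model maps
# resources through activities to intended results.
logic_model = {
    "inputs": ["funding", "tutors", "classroom space"],
    "activities": ["weekly tutoring sessions", "parent workshops"],
    "outputs": ["200 students tutored per semester"],
    "short_term_outcomes": ["improved homework completion"],
    "long_term_outcomes": ["higher graduation rates"],
}

# An evaluator can walk the chain to confirm each stage is
# actually specified before gathering evidence about it.
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```

The value of laying the model out this way is that gaps become visible: if a stage is empty, the program's theory has a hole that the evaluation should probe.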
<extrainfo>
Additional Evaluation Types and Approaches
Beyond paradigms and frameworks, evaluations can be categorized by their purpose:
Formative Evaluation: Conducted during program development to improve the program
Summative Evaluation: Conducted at the end to determine overall effectiveness
Utilization-Focused Evaluation: Emphasizes designing evaluation around how findings will actually be used
Developmental Evaluation: Supports programs in complex, changing environments by providing real-time feedback
Theory-Driven Evaluation: Tests and refines the program's underlying theory of change
Realist Evaluation: Focuses on understanding what works, for whom, and under what circumstances
Principles-Focused Evaluation: Evaluates adherence to core principles rather than just outcomes
Context, Input, Process, Product (CIPP) Model Evaluation: Assesses all stages of a program from planning through completion
While these distinctions are useful, they're less central to foundational understanding than the three paradigms and frameworks discussed above.
</extrainfo>
Flashcards
How is program evaluation defined in terms of its process and purpose?
The systematic collection, analysis, and use of information to answer questions about projects, policies, and programs.
What does it mean for a program evaluation to assess effectiveness?
Determining whether a program does what it intends to do.
What does it mean for a program evaluation to assess efficiency?
Determining whether a program provides good value for money.
What are the core emphases of the Positivist Paradigm in evaluation?
Objective, observable, and measurable aspects, emphasizing quantitative evidence.
What is the primary goal of the Interpretive Paradigm in program evaluation?
To understand stakeholder perspectives, experiences, and expectations before judging program merit.
What is the underlying basis and aim of the Critical-Emancipatory Paradigm?
Action research aimed at social transformation and empowerment.
In what geographical context is the Critical-Emancipatory Paradigm especially useful?
Developing countries.
What are the various types of program evaluation listed in the text?
Utilization-Focused Evaluation
CIPP Model (Context, Input, Process, Product)
Formative Evaluation
Summative Evaluation
Developmental Evaluation
Principles-Focused Evaluation
Theory-Driven Evaluation
Realist Evaluation
Which organization developed the six-step framework for public-health program evaluation in 1999?
The Centers for Disease Control and Prevention (CDC).
What specific developmental tool is included in the CDC's framework for program evaluation?
Logic-model development.
Quiz
Program evaluation - Core Foundations of Evaluation Quiz Question 1: Which of the following questions is an evaluator likely to help answer?
- What is the program’s cost per participant? (correct)
- What color should the program’s logo be?
- How many coffee breaks are scheduled?
- What music should be played at meetings?
Question 2: Formative evaluation is primarily conducted to:
- Improve a program while it is being developed (correct)
- Summarize final outcomes after program completion
- Determine legal compliance only
- Assign blame for program failures
Question 3: Developmental evaluation is especially useful for:
- Innovative, complex, and evolving programs (correct)
- Static, unchanging initiatives
- Programs with fixed, unalterable designs
- Evaluations that ignore real‑time feedback
Question 4: What is a key component of the CDC’s six‑step public‑health evaluation framework?
- Developing a logic model (correct)
- Choosing a corporate brand name
- Designing office interiors
- Scheduling annual holiday parties
Question 5: In program evaluation, effectiveness is defined as which of the following?
- Whether the program does what it intends to do (correct)
- The amount of money saved by the program
- The number of staff employed by the program
- The visual appeal of program materials
Question 6: Stakeholders primarily examine evaluation results to verify whether programs achieve what?
- Their promised effects (correct)
- Their social media likes
- Their office décor style
- Their trademark slogans
Question 7: The positivist evaluation paradigm primarily relies on which type of evidence?
- Quantitative evidence (correct)
- Anecdotal stories
- Artistic performances
- Personal belief statements
Question 8: Critical‑emancipatory evaluation is based on action research aimed at achieving what?
- Social transformation and empowerment (correct)
- Purely theoretical modeling
- Strict cost reduction
- Standardized testing without stakeholder input
Question 9: Critical‑emancipatory evaluations are especially useful in which type of settings?
- Developing countries (correct)
- High‑tech corporate R&D labs
- Luxury brand marketing campaigns
- Space exploration missions
Question 10: Evaluation findings can be employed to influence which of the following?
- Ideological, social, and political agendas (correct)
- Automatic increase of program funding
- Elimination of all future planning activities
- Standardization of global cultural practices
Question 11: Which of the following best describes a mixed‑methods approach in program evaluation?
- Combines both quantitative and qualitative data collection and analysis (correct)
- Uses only numerical surveys and statistical tests
- Relies solely on participant observations and narrative accounts
- Employs only archival document review without new data
Question 12: In the interpretive evaluation paradigm, how are qualitative and quantitative data typically used together?
- Qualitative insights inform the interpretation of quantitative results (correct)
- Quantitative data are collected but never analyzed
- Qualitative data replace all numerical information
- Both data types are kept completely separate without integration
Key Concepts
Evaluation Paradigms
Positivist paradigm
Interpretive paradigm
Critical‑emancipatory paradigm
Evaluation Models and Frameworks
Utilization‑focused evaluation
CIPP model (Context, Input, Process, Product)
CDC Six‑Step Framework
Types of Evaluation
Formative evaluation
Summative evaluation
Developmental evaluation
Definitions
Program evaluation
Systematic collection, analysis, and use of information to assess the effectiveness, efficiency, and relevance of projects, policies, or programs.
Positivist paradigm
An evaluation approach that emphasizes objective, observable, and measurable data, often using quantitative methods.
Interpretive paradigm
An evaluation approach that seeks to understand stakeholder perspectives, experiences, and expectations through qualitative methods such as interviews and focus groups.
Critical‑emancipatory paradigm
An action‑research based evaluation approach focused on social transformation, empowerment, and addressing power structures, often involving participatory methods.
Utilization‑focused evaluation
An evaluation model that designs and conducts evaluations with the primary aim of informing intended users and facilitating practical use of findings.
CIPP model (Context, Input, Process, Product)
A comprehensive evaluation framework that examines the context, resources, implementation processes, and outcomes of a program.
Formative evaluation
An assessment conducted during program development or implementation to provide feedback for improvement.
Summative evaluation
An assessment conducted after program completion to determine overall effectiveness and outcomes.
Developmental evaluation
An evaluation approach that supports innovation and adaptation in complex, evolving programs by providing real‑time feedback.
CDC Six‑Step Framework
A public‑health evaluation model developed by the Centers for Disease Control and Prevention that guides evaluators through six systematic steps, including logic‑model development.