Introduction to Program Evaluation
Understand the purpose of program evaluation, the differences between formative and summative approaches, and the key steps of the evaluation cycle.
Summary
Introduction to Program Evaluation
What Is Program Evaluation?
Program evaluation is the systematic process of gathering information about how a program is designed, implemented, and what results it produces. Think of it as asking critical questions about a program: Is it running as intended? Is it actually working? Is it achieving what stakeholders hoped it would?
The main purpose of program evaluation is to provide evidence-based feedback to help stakeholders—program managers, funders, participants, and policymakers—make informed decisions. These decisions might include whether to continue a program, invest more resources, make improvements, scale it up, or end it altogether.
Why Evaluation Matters: Three Key Benefits
Accountability. Evaluation demonstrates that money and effort are being spent wisely. Funders and the public want to know their investments produce real results.
Learning. Evaluation identifies what works, for whom, and under what conditions. This knowledge helps program staff improve operations and informs better program design in the future.
Evidence for Policy. Evaluation findings provide concrete data that guide decisions about scaling programs, adopting new policies, or allocating resources in specific ways.
Who Uses Evaluation Results?
Different stakeholders rely on evaluation information for different reasons:
Program managers use findings to improve how services are delivered day-to-day
Funders use findings to decide whether to continue funding or invest elsewhere
Participants and clients benefit when evaluations identify gaps in service quality
Policymakers and community leaders use evaluation evidence to assess public impact and inform broader policy decisions
Types of Program Evaluation
Understanding the two main types of evaluation is crucial because they answer different questions and are used at different stages of a program's life.
Formative Evaluation: Looking at Process
Formative evaluation focuses on how a program is being delivered. It examines whether the program is actually being implemented as originally planned and whether participants are receiving the services they're supposed to receive.
Key questions in formative evaluation include:
Are program activities happening as intended?
Are participants actually showing up and engaging?
What barriers or challenges are staff encountering?
How satisfied are participants with the services they're receiving?
Formative evaluation typically occurs during the early and middle phases of a program. Its findings drive real-time improvements: staff can adjust operations, remove barriers, and refine how services are delivered.
Example: A new after-school tutoring program might conduct formative evaluation by observing sessions and interviewing tutors. If evaluators discover that attendance is low because students lack transportation, staff can arrange rides or change meeting times.
Summative Evaluation: Looking at Outcomes
Summative evaluation focuses on results and impact after a program has been running for a sufficient period. It asks the bigger-picture question: Did the program actually achieve what it was supposed to achieve?
Key questions in summative evaluation include:
Did the program reach its stated goals?
Did measurable outcomes actually change for participants (for example, did test scores improve or vaccination rates increase)?
Summative evaluation typically occurs at the end of a program or after a major phase when there's enough time and data to assess whether outcomes have changed. The findings inform strategic decisions about whether to continue, scale up, or terminate the program.
Example: After the tutoring program has run for a full school year, a summative evaluation might compare students' test score gains to a similar group of students who did not receive tutoring, showing whether the program actually improved academic achievement.
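The comparison described in this example can be sketched in a few lines of Python. The score data below are hypothetical, invented purely for illustration; a real summative evaluation would also test whether the difference is statistically meaningful:

```python
from statistics import mean

# Hypothetical test-score gains (end-of-year minus start-of-year) for
# students who received tutoring and a similar comparison group who did not.
tutored_gains = [12, 8, 15, 10, 9, 14, 11]
comparison_gains = [5, 7, 4, 6, 8, 5, 6]

# A summative evaluation compares average outcomes across the two groups.
tutored_avg = mean(tutored_gains)
comparison_avg = mean(comparison_gains)
difference = tutored_avg - comparison_avg

print(f"Tutored group average gain:    {tutored_avg:.1f}")
print(f"Comparison group average gain: {comparison_avg:.1f}")
print(f"Estimated program effect:      {difference:.1f} points")
```

The comparison group is what lets the evaluation attribute the gain to the program rather than to ordinary year-over-year improvement.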
Key Differences Between Formative and Summative Evaluation
| Aspect | Formative | Summative |
|--------|-----------|-----------|
| Focus | Process and implementation | Outcomes and results |
| Timing | Continuous, during program | Single point in time, at end |
| When to Use | Program is new or under revision | Program has run long enough to show results |
| Methods | Qualitative (observations, interviews) | Quantitative (statistics, comparisons) |
| Purpose | Make immediate improvements | Inform strategic decisions |
Tricky Distinction Alert
A common confusion: students sometimes think formative evaluation is only about asking for feedback. In reality, formative evaluation is specifically about evaluating the program's implementation process, not just getting general feedback. Similarly, summative evaluation isn't just a final test—it's a systematic assessment of whether the program's intended outcomes were achieved, often using comparison groups to show that the program actually caused the changes.
The Program Evaluation Cycle
Evaluation follows a logical, structured sequence. Understanding these steps helps you see how all pieces of an evaluation fit together.
Step 1: Define Purpose and Questions
Before collecting any data, evaluators must be crystal clear about what they're trying to learn. This step involves:
Writing an evaluation purpose statement that identifies who will use the findings and why
Developing specific evaluation questions that guide all subsequent decisions
Evaluation questions are typically concrete. Instead of asking "Is the program good?", evaluators ask things like "What percentage of participants completed the program?" or "Did student attendance improve compared to the previous year?"
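A concrete question like "What percentage of participants completed the program?" reduces to a simple calculation over enrollment records. A minimal sketch, using invented records for illustration:

```python
# Hypothetical enrollment records: each entry marks whether the
# participant completed the program.
records = [
    {"participant": "A", "completed": True},
    {"participant": "B", "completed": False},
    {"participant": "C", "completed": True},
    {"participant": "D", "completed": True},
]

# Completion rate = completers / all participants, as a percentage.
completed = sum(1 for r in records if r["completed"])
completion_rate = 100 * completed / len(records)
print(f"Completion rate: {completion_rate:.0f}%")  # prints "Completion rate: 75%"
```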
Step 2: Develop the Evaluation Design
The design matches the evaluation questions with appropriate methods. This step specifies:
Data collection methods such as surveys, interviews, observations, document review, or statistical analysis
Sampling strategy: Who will provide information? Everyone or a selected group?
Timeline: When will data be collected?
Measurement instruments: What specific tools will be used (questionnaires, observation protocols, etc.)?
The key principle: the design should directly answer the evaluation questions you identified in Step 1.
Step 3: Collect Data
This is when evaluators actually gather information from program staff, participants, program records, or external sources. Data collection might happen through:
Questionnaires completed by participants
Focus group discussions
One-on-one interviews with staff
Direct observations of program activities
Extraction of data from existing databases or records
Important ethical consideration: Evaluators must address informed consent (people know they're being evaluated and agree to participate) and confidentiality (protecting people's privacy) during data collection.
Step 4: Analyze Data
Now the information gathered must be examined for patterns and meaning. Analysis looks for connections to the evaluation questions.
Quantitative analysis (with numbers) might involve:
Calculating averages, percentages, or rates
Comparing groups using statistical tests
Determining whether changes are meaningful or just chance
Qualitative analysis (with words) might involve:
Coding interview transcripts—labeling sections with themes
Identifying patterns in what people said
Summarizing key insights from observations
The analysis determines whether program goals were actually met and identifies strengths and areas needing improvement.
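The qualitative side of this step can be mimicked in a small sketch: each interview excerpt is tagged with theme codes, and tallying the codes reveals which themes recur. The excerpts and theme labels below are illustrative assumptions, not real data:

```python
from collections import Counter

# Coded interview excerpts: each excerpt has been labeled with one or
# more themes by the evaluator.
coded_excerpts = [
    {"text": "Students can't get rides after 5pm", "themes": ["transportation"]},
    {"text": "Tutors feel sessions are too short", "themes": ["session_length", "staffing"]},
    {"text": "Parents praised the math help", "themes": ["satisfaction"]},
    {"text": "Bus schedule conflicts with sessions", "themes": ["transportation"]},
]

# Tally theme frequencies to surface patterns across interviews.
theme_counts = Counter(t for e in coded_excerpts for t in e["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

In practice coding is iterative and interpretive; the tally is only the final summarizing step, not the analysis itself.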
Step 5: Report and Use Results
Findings must be communicated clearly to stakeholders in formats they can actually use:
Written reports with clear conclusions
Presentations to decision-makers
Dashboards or infographics
Executive summaries for busy administrators
Actionable recommendations are crucial—the report shouldn't just describe findings, but explain what should be done differently based on those findings.
When stakeholders review the report, they decide how to apply it: Should the program be refined based on formative evaluation findings? Should it be scaled up if summative evaluation shows strong results? Should it be terminated if results are poor?
Bringing It All Together
Now that you understand program evaluation's definition, types, and process, recognize that these elements work together:
A formative evaluation uses the evaluation cycle to improve a program that's currently operating
A summative evaluation uses the same structured cycle to determine whether the program achieved its goals
Both types answer purposeful questions through systematic processes that involve stakeholders and lead to actionable decisions
The core principle underlying all program evaluation is this: Evaluation exists to provide evidence that helps people make better decisions about programs and policies.
Flashcards
What is the systematic process of gathering information about a program’s design, implementation, and outcomes?
Program evaluation
Which major decisions does program evaluation help stakeholders make regarding a program?
Continue, scale, modify, or end the program
What core principle describes evaluation as being designed to answer specific questions?
Purposeful
What core principle describes evaluation as following a structured sequence of steps?
Systematic
How does evaluation support learning within a program?
By identifying what works, for whom, and under what conditions
What is the primary focus of a formative evaluation?
How a program is being delivered
During which phases of a program is formative evaluation typically conducted?
Early and middle phases
What is the primary use of formative evaluation results?
To fine-tune program operations and improve delivery
What type of methods, such as observations and staff interviews, does formative evaluation often use?
Qualitative methods
In what specific program scenario should formative evaluation be used?
When a program is new or undergoing major changes
When does a summative evaluation typically take place?
At the end of a program or after a major phase
What does a summative evaluation ask regarding a program's objectives?
Whether the program achieved its stated goals
When is it appropriate to use a summative evaluation?
When the program has operated long enough to produce measurable results
What is the first step in the evaluation cycle?
Define Purpose and Questions
What determines the selection of methods and data sources during the evaluation process?
Specific evaluation questions
What three components are specified during the evaluation design step?
Sampling strategy
Data collection timeline
Measurement instruments
What is the primary goal of the data analysis step in the evaluation cycle?
To summarize findings and look for patterns related to evaluation questions
What essential component must be included in evaluation reports to make them useful?
Actionable recommendations
Quiz
Introduction to Program Evaluation Quiz
Question 1: Formative evaluation primarily focuses on which aspect of a program?
- How the program is being delivered (correct)
- Whether the program met its long‑term goals
- The statistical significance of outcome data
- The cost‑effectiveness of program staff salaries
Question 2: What primary question does summative evaluation address?
- Whether the program achieved its stated goals (correct)
- How the program staff feel about their workload
- What resources are needed for future program expansion
- Which qualitative methods best capture participant experiences
Question 3: In the evaluation cycle, what does the purpose statement identify?
- Who will use the evaluation findings (correct)
- The exact statistical tests to be performed
- The budget allocated for data collection
- The timeline for publishing the final report
Question 4: Which group typically uses evaluation results to improve program operations?
- Program managers (correct)
- Funders
- Participants
- Community members and policymakers
Question 5: Which type of evaluation most often relies on qualitative methods such as observations and staff interviews?
- Formative evaluation (correct)
- Summative evaluation
- Experimental evaluation
- Cost‑effectiveness evaluation
Question 6: Based on program evaluation, which of the following actions can stakeholders decide to take?
- Continue, scale, modify, or end the program (correct)
- Increase staff salaries
- Change the organization’s mission statement
- Expand office space
Question 7: When is formative evaluation most appropriate to use?
- When a program is new or undergoing major changes (correct)
- After a program has been operating for several years and results are available
- During the final reporting phase of a completed program
- When budgeting decisions are being made without program data
Question 8: What is the primary purpose of the design step in the evaluation cycle?
- To select methods that match the evaluation questions (correct)
- To gather raw data from participants and records
- To write and distribute the final evaluation report
- To allocate funding for program activities
Key Concepts
Evaluation Types
Formative evaluation
Summative evaluation
Program evaluation
Evaluation Process
Evaluation cycle
Data collection methods
Data analysis
Reporting and use of results
Evaluation Context
Stakeholders
Core principles of evaluation
Benefits of program evaluation
Definitions
Program evaluation
A systematic process of gathering and analyzing information about a program’s design, implementation, and outcomes to inform decisions.
Formative evaluation
An ongoing assessment that focuses on program processes and implementation to improve delivery during development.
Summative evaluation
A final assessment that measures program outcomes and impact to determine effectiveness and guide strategic decisions.
Evaluation cycle
A sequence of steps—including purpose definition, design, data collection, analysis, and reporting—that structures the evaluation process.
Stakeholders
Individuals or groups such as managers, funders, participants, and policymakers who use evaluation findings to make decisions.
Core principles of evaluation
Foundational concepts, such as purposefulness and systematic methodology, that ensure evaluations are rigorous and relevant.
Benefits of program evaluation
Advantages including accountability, learning, and evidence‑based guidance for future program design and policy.
Data collection methods
Techniques such as surveys, interviews, observations, and document review used to gather information for evaluation.
Data analysis
The process of summarizing and interpreting quantitative and qualitative data to answer evaluation questions.
Reporting and use of results
Communicating findings and recommendations to stakeholders to support program improvement, scaling, or termination.