RemNote Community

Study Guide

## 📖 Core Concepts

- **Program Evaluation** – systematic collection, analysis, and use of information to answer key questions about a program's effectiveness, efficiency, and relevance.
- **Effectiveness vs. Efficiency** – Effectiveness: does the program achieve its intended outcomes? Efficiency: does it achieve those outcomes at the lowest possible cost?
- **Logic Model (Program Theory)** – visual map of Inputs → Activities → Outputs → Short‑term Outcomes → Long‑term Outcomes that explains how a program is expected to work.
- **Evaluation Paradigms** – Positivist (quantitative, objective), Interpretive (stakeholder perspectives, qualitative), Critical‑Emancipatory (social justice, participatory).
- **CIPP Model** – Context (needs, goals), Input (resources, design), Process (implementation fidelity), Product (outcomes/impact).
- **Five‑Tiered Evaluation** – Tier 1 (needs), Tier 2 (monitoring), Tier 3 (quality review), Tier 4 (short‑term outcomes), Tier 5 (long‑term impact).
- **Utilization Types** – Persuasive (advocacy), Direct/Instrumental (program change), Conceptual (awareness).
- **Reliability, Validity, Sensitivity** – measurement-quality dimensions essential for credible results.

## 📌 Must Remember

- **Evaluation questions** address cost per participant, impact, alternatives, unintended consequences, and goal relevance.
- **CDC Six‑Step Framework**: 1) Engage stakeholders, 2) Describe the program, 3) Focus the design, 4) Gather evidence, 5) Justify conclusions, 6) Ensure use and share lessons.
- **CIPP timing** – Context & Input = pre‑implementation; Process = during implementation; Product = post‑implementation.
- **Cost‑benefit ratio** – a lower ratio ⇒ higher efficiency; static efficiency = least cost for given objectives, dynamic efficiency = continuous improvement.
- **Reliability ↑ → Power ↑**; validity = "measuring the right thing."
- **Empowerment Evaluation steps**: 1) Mission, 2) Take stock, 3) Plan for the future.
- **Internal vs. External Evaluators** – internal = insider knowledge, lower cost, possible bias; external = objectivity, expertise, higher cost.
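The cost‑benefit ratio rule above (lower ratio ⇒ higher static efficiency) can be checked with a quick sketch. The function name and all figures here are hypothetical, chosen only to illustrate the comparison:

```python
def cost_benefit_ratio(cost: float, benefit: float) -> float:
    """Cost per unit of benefit; a lower ratio means higher efficiency."""
    return cost / benefit

# Hypothetical programs producing the same benefit at different costs.
ratio_a = cost_benefit_ratio(cost=50_000, benefit=100_000)  # 0.5
ratio_b = cost_benefit_ratio(cost=80_000, benefit=100_000)  # 0.8

# Program A reaches the same objective at lower cost, so it is the
# more (statically) efficient option -- efficiency, not effectiveness.
more_efficient = "A" if ratio_a < ratio_b else "B"
```

Note that this says nothing about effectiveness: a program can be cheap per unit of benefit and still fail to meet its goals.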
## 🔄 Key Processes

**Needs Assessment (Tier 1)**
1. Define the problem precisely.
2. Perform gap analysis → identify "where" & "how big."
3. Analyze whether the proposed plan can eliminate the need.
4. Conduct task analysis → best intervention method.

**Logic Model Development**
1. List Inputs, Activities, Outputs, Short‑term Outcomes, Long‑term Outcomes.
2. Test plausibility via expert review, literature comparison, and observation.

**Implementation (Process) Assessment**
1. Verify target reach, service receipt, and staff qualifications.
2. Use repeated measures to monitor fidelity.

**Impact (Effectiveness) Evaluation**
1. Identify observable indicators → measure Outcome Level (single time point) and Outcome Change (difference over time).
2. Estimate Program Effect = the portion of outcome change attributable to the program (often via quasi‑experimental designs or statistical modeling).

**Efficiency (Cost‑Benefit) Assessment**
- Calculate total costs vs. benefits; compute the cost‑benefit ratio.

**Utilization Planning**
- Align results with stakeholder decision‑making timelines, ensure clear communication, and embed dissemination in the design.

## 🔍 Key Comparisons

- **Positivist vs. Interpretive Paradigm** – quantitative, objective evidence vs. stakeholder‑centered qualitative insights.
- **Internal vs. External Evaluator** – insider knowledge & lower cost vs. objectivity & higher expertise.
- **Formative (Tier 3) vs. Summative (Tier 5) Evaluation** – ongoing improvement focus vs. final impact determination.
- **Static vs. Dynamic Efficiency** – lowest cost for current objectives vs. continual program improvement over time.

## ⚠️ Common Misunderstandings

- "Reliability guarantees validity." – Reliable tools can still measure the wrong construct.
- "Process evaluation = impact evaluation." – Process checks fidelity; impact tests causal outcomes.
- "Higher cost always means higher efficiency." – Efficiency is about the ratio, not the absolute cost.
- "External evaluators are always better." – They may miss contextual nuances that internal staff know.
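The Program Effect estimate above (outcome change minus what would have happened anyway) can be sketched as a minimal difference‑in‑differences calculation: one common quasi‑experimental approach. The function and the test scores are hypothetical, used only to make the arithmetic concrete:

```python
def program_effect(treat_pre: float, treat_post: float,
                   comp_pre: float, comp_post: float) -> float:
    """Difference-in-differences: the outcome change in the program group
    minus the change the comparison group shows without the program."""
    outcome_change = treat_post - treat_pre        # raw change over time
    counterfactual_change = comp_post - comp_pre   # change absent the program
    return outcome_change - counterfactual_change  # change attributable to program

# Hypothetical scores: both groups improve, but the program group improves more.
effect = program_effect(treat_pre=60, treat_post=75,
                        comp_pre=58, comp_post=63)
# outcome change = 15, counterfactual change = 5, program effect = 10
```

This is the "Counterfactual Clock" intuition in code: the comparison group plays the role of the clock showing what would have happened without the program.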
## 🧠 Mental Models / Intuition

- **"Input–Output–Outcome Funnel"** – visualize a narrowing funnel: many inputs and activities produce many outputs, which funnel into fewer, higher‑value outcomes.
- **"Counterfactual Clock"** – when estimating impact, imagine a clock showing what would have happened without the program; the difference is the program effect.
- **"Budget‑Constraint Slider"** – slide left/right to see how shoestring vs. full‑budget designs shift sample size, data sources, and rigor.

## 🚩 Exceptions & Edge Cases

- **Cultural/language barriers** – without lexical and conceptual equivalence, measurement validity collapses.
- **Dynamic environments** – in rapidly changing contexts, developmental evaluation (real‑time feedback) supersedes static summative designs.
- **Limited data** – the shoestring approach relies on secondary data and small samples; triangulation is essential to mitigate bias.

## 📍 When to Use Which

| Decision Trigger | Recommended Approach |
|------------------|----------------------|
| Early, unknown strategy | Developmental (real‑time) evaluation |
| Need to monitor fidelity | Process (Tier 3) or formative CIPP |
| Assess short‑term outcomes | Tier 4, quasi‑experimental design |
| Assess long‑term impact | Tier 5, impact evaluation with control/comparison groups |
| Severe budget & time limits | Shoestring / Five‑Tiered low‑intensity design |
| Stakeholder demand for actionable change | Direct/Instrumental utilization focus |
| Goal: build community capacity | Empowerment evaluation |

## 👀 Patterns to Recognize

- "Gap → Priority → Solution" pattern in needs assessments.
- Repeated "if‑then" causal chains in logic models (if activity X, then output Y → outcome Z).
- Triangulation appears whenever data sources are limited (quantitative + qualitative).
- Stakeholder language mismatches often signal validity threats.

## 🗂️ Exam Traps

- Choosing "reliability" when the question asks "does the tool measure the right construct?" – the answer is **validity**.
- Confusing "process evaluation" with "impact evaluation." Process → fidelity; impact → causal outcomes.
- Assuming a low cost‑benefit ratio automatically means a program is successful. It only indicates efficiency, not effectiveness.
- Selecting "external evaluator" as the default answer for objectivity. This ignores context: internal may be more appropriate when deep program knowledge is needed.
- Mixing up the CDC's six steps with the CIPP phases. The CDC framework guides public‑health evaluation; CIPP is a model mapping evaluation types to program stages.

---

Use this guide for quick recall before the exam – focus on the bolded keywords, the step‑by‑step processes, and the decision tables.