
# Study Guide

## 📖 Core Concepts

- **Evidence‑Based Medicine (EBM)** – The systematic integration of the best current research evidence, clinician expertise, and patient values to guide individual patient care.
- **Three Pillars of EBM** – (1) Best research evidence, (2) Clinical expertise, (3) Patient preferences.
- **Clinical Question Framework (PICO‑T)** – Population, Intervention, Comparison, Outcomes, Time horizon, Setting.
- **Hierarchy of Evidence** – Systematic reviews of high‑quality RCTs → single RCTs → well‑designed cohort/case‑control studies → case series → expert opinion.
- **GRADE Quality Levels** – High, Moderate, Low, Very low (confidence that the estimate reflects the true effect).
- **USPSTF Grades** – A (strong benefit), B (moderate benefit), C (close balance), D (harm outweighs benefit), I (insufficient evidence).

---

## 📌 Must Remember

- Highest‑yield evidence = systematic review/meta‑analysis of randomized, double‑blind, placebo‑controlled trials with allocation concealment and complete follow‑up.
- USPSTF Grade A → recommend; Grade D → recommend against; Grade I → discuss uncertainty.
- GRADE strong recommendation = benefits clearly outweigh harms and evidence quality is at least moderate.
- Number Needed to Treat (NNT) = $1 / \text{ARR}$, where ARR = control event rate − experimental event rate.
- Likelihood Ratio (LR+) = Sensitivity / (1 − Specificity) for a positive test; post‑test odds = pre‑test odds × LR.
- Key 7 Steps of EBM – Ask, Acquire, Appraise, Apply, Evaluate, Disseminate, Integrate.
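The NNT and likelihood‑ratio formulas above can be checked with a short worked sketch. All numbers below (event rates, sensitivity/specificity, pre‑test probability) are hypothetical values chosen for illustration:

```python
def nnt(control_event_rate: float, experimental_event_rate: float) -> float:
    """NNT = 1 / ARR, where ARR = control rate - experimental rate (as proportions)."""
    arr = control_event_rate - experimental_event_rate
    return 1 / arr

def positive_lr(sensitivity: float, specificity: float) -> float:
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Convert probability to odds, multiply by the LR, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical trial: 10% events on control, 5% on treatment.
# ARR = 0.05 (a proportion, NOT "5%"), so NNT = 20.
print(nnt(0.10, 0.05))                                # 20.0

# Hypothetical test: sensitivity 0.90, specificity 0.80 -> LR+ = 4.5.
lr_plus = positive_lr(0.90, 0.80)

# Pre-test probability 20% updated by that positive result.
print(round(post_test_probability(0.20, lr_plus), 2))  # 0.53
```

Note how a decent test (LR+ = 4.5) moves a 20% pre‑test probability only to about 53%, which is the quantitative version of "update, don't replace, your clinical suspicion."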
---

## 🔄 Key Processes

| Process | Steps (concise) |
|---------|-----------------|
| Guideline Development (10 steps) | 1️⃣ Formulate the PICO‑T question 2️⃣ Search the literature 3️⃣ Interpret each study 4️⃣ Meta‑analyse if multiple studies exist 5️⃣ Create evidence tables 6️⃣ Build a benefit‑harm‑cost balance sheet 7️⃣ Draw conclusions 8️⃣ Write the guideline & rationale 9️⃣ Peer review each step 10️⃣ Implement |
| Individual Clinical Decision (5 steps) | 1️⃣ Translate uncertainty into an answerable question (consider design/level) 2️⃣ Systematically retrieve the best evidence 3️⃣ Critically appraise it (bias, confounding, effect size, precision, external validity) 4️⃣ Apply the results to the patient 5️⃣ Evaluate performance after implementation |
| GRADE Assessment | Assess risk of bias, inconsistency, indirectness, imprecision, and publication bias → downgrade; consider large effect, dose‑response, and plausible confounding → upgrade. |
| Meta‑analysis | 1️⃣ Define inclusion criteria 2️⃣ Systematically collect studies 3️⃣ Extract effect sizes 4️⃣ Weight by inverse variance 5️⃣ Compute the pooled estimate (fixed or random effects) 6️⃣ Assess heterogeneity (I²) |

---

## 🔍 Key Comparisons

- **RCT vs Observational Study** – Randomization controls confounding → higher internal validity. Observational studies reflect real‑world practice → better external validity but a higher risk of bias.
- **USPSTF Grade A vs B** – A: strong evidence, substantial net benefit → offer routinely. B: moderate evidence, net benefit → discuss and consider patient preference.
- **Likelihood Ratio (LR+) vs Odds Ratio (OR)** – LR+: diagnostic test metric that directly updates pre‑test odds (Bayes). OR: measure of association in analytic studies; not used for post‑test probability.
- **Systematic Review vs Narrative Review** – Systematic: pre‑specified protocol, exhaustive search, reproducible. Narrative: selective, potentially biased, no formal appraisal.
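The meta‑analysis steps above can be sketched numerically. This is a minimal fixed‑effect, inverse‑variance pooling with Cochran's Q and I²; the three studies' effect sizes (log risk ratios) and variances are made up for illustration:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling with heterogeneity statistics.

    effects: per-study effect sizes (e.g., log risk ratios).
    variances: the variance of each effect estimate.
    Returns (pooled effect, pooled SE, Cochran's Q, I^2 as a percentage).
    """
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, q, i2

# Three hypothetical studies with noticeably discordant results.
effects = [-0.60, -0.05, -0.45]
variances = [0.01, 0.01, 0.04]
pooled, se, q, i2 = fixed_effect_pool(effects, variances)
print(f"pooled log RR = {pooled:.3f} ± {1.96 * se:.3f}, I² = {i2:.0f}%")
```

Here I² comes out around 87%, well above the 50% threshold flagged under "Patterns to Recognize", so a GRADE panel would likely downgrade for inconsistency (and a random‑effects model would be more appropriate than the fixed‑effect pooling shown).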
---

## ⚠️ Common Misunderstandings

- “EBM ignores patient individuality.” – Patient values are a core pillar; guidelines are starting points, not mandates.
- “All RCTs are high quality.” – Quality depends on blinding, allocation concealment, follow‑up completeness, and homogeneity.
- “A statistically significant p‑value guarantees clinical importance.” – Always check the effect size, confidence interval, and NNT/NNH.
- “Publication bias only affects meta‑analyses.” – It also skews the apparent strength of evidence in any literature review.

---

## 🧠 Mental Models / Intuition

- **Bayes’ Theorem as a “Probability Slider”** – Think of the pre‑test odds as a slider; a high LR pushes the slider far toward certainty, while a low LR barely moves it.
- **Evidence Pyramid** – Visualize evidence quality as a pyramid: base = expert opinion, apex = systematic review of RCTs. The higher you climb, the more confident you can be.
- **Decision Balance Sheet = Scale** – List benefits on one side and harms/costs on the other; the heavier side determines the recommendation strength.

---

## 🚩 Exceptions & Edge Cases

- When RCTs are unethical or impossible → rely on well‑conducted cohort or case‑control studies, but downgrade confidence.
- Heterogeneous populations → subgroup analyses may be needed; a “high‑quality” overall estimate may not apply to a specific subgroup.
- Very low‑frequency outcomes → even large RCTs may lack power; consider observational data despite its lower position in the hierarchy.
- Conflicts of interest → downgrade evidence quality if industry funding is present and not adequately addressed.

---

## 📍 When to Use Which

- Use LR+ when you have a diagnostic test result and need to update disease probability.
- Use NNT when communicating the absolute benefit of an intervention to patients (easier to grasp than relative risk reduction).
- Use GRADE for formal guideline panels or when you must justify recommendation strength to policymakers.
- Use USPSTF grades when counseling on preventive services (screening, counseling, immunizations).
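The “probability slider” intuition can be made concrete with a short sketch. The 30% pre‑test probability and the LR values below are illustrative:

```python
def slide(pre_test_prob: float, lr: float) -> float:
    """One Bayesian update: probability -> odds -> (x LR) -> probability."""
    odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Starting from a 30% pre-test probability, see how far each LR moves the slider.
for lr in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"LR = {lr:>4}: post-test probability = {slide(0.30, lr):.2f}")
```

An LR of 1 leaves the slider exactly where it started (30%); an LR of 10 pushes it up to about 81%, while an LR of 0.1 pulls it down to about 4%. Modest LRs (0.5–2) barely move it, which is why tests with LRs near 1 are rarely worth ordering.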
- Use a systematic review/meta‑analysis when multiple small studies address the same question and you need a precise pooled estimate.

---

## 👀 Patterns to Recognize

- “Benefit‑risk balance close to 1” → look for Grade C or a weak GRADE recommendation.
- High I² (> 50%) in a meta‑analysis → suspect inconsistency → may downgrade evidence.
- Large effect size (RR > 2 or < 0.5) with a dose‑response trend → potential for upgrading observational evidence.
- Repeated “expert opinion” citations across guidelines → red flag for a low‑quality evidence base.

---

## 🗂️ Exam Traps

- Confusing LR+ with OR – the LR updates probability; the OR does not.
- Choosing “high quality” just because a study is an RCT while ignoring blinding, allocation concealment, and loss to follow‑up.
- Assuming “Grade A” = “strong recommendation” – a USPSTF grade reflects net benefit, not GRADE strength; GRADE uses its own separate “strong/weak” terminology.
- NNT = 1/ARR – remember that ARR must be expressed as a proportion (e.g., 0.05), not a percentage (5%).
- Misreading “Level II‑1” as “low quality” – in the USPSTF hierarchy, Level II‑1 is a well‑designed non‑randomized trial, still moderate quality.