Sensitivity and Specificity Study Guide
📖 Core Concepts
Sensitivity (True Positive Rate): Probability test is positive given the person truly has the disease.
Specificity (True Negative Rate): Probability test is negative given the person truly does not have the disease.
Both are intrinsic properties of a test – they do not change with disease prevalence.
Positive Predictive Value (PPV) = probability disease is present given a positive result.
Negative Predictive Value (NPV) = probability disease is absent given a negative result.
Likelihood Ratios combine sensitivity and specificity to show how much a result shifts disease odds.
---
📌 Must Remember
Formulas
Sensitivity $= \dfrac{TP}{TP+FN}$
Specificity $= \dfrac{TN}{TN+FP}$
PPV $= \dfrac{TP}{TP+FP}$
NPV $= \dfrac{TN}{TN+FN}$
Positive LR $= \dfrac{\text{sensitivity}}{1-\text{specificity}}$
Negative LR $= \dfrac{1-\text{sensitivity}}{\text{specificity}}$
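A minimal sketch tying the formulas together (Python; the 2 × 2 counts are invented for illustration):

```python
# Compute all six metrics from the four cells of a 2x2 table.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "lr_plus": sensitivity / (1 - specificity),   # positive likelihood ratio
        "lr_minus": (1 - sensitivity) / specificity,  # negative likelihood ratio
    }

# Made-up counts: 90 TP, 50 FP, 850 TN, 10 FN.
print(diagnostic_metrics(tp=90, fp=50, tn=850, fn=10))
# sensitivity 0.90, specificity ≈ 0.94, PPV ≈ 0.64, NPV ≈ 0.99,
# LR+ ≈ 16.2, LR− ≈ 0.11
```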
Mnemonics
SnNout – a Sensitive test that is Negative rules out disease.
SpPin – a Specific test that is Positive rules in disease.
Trade‑off: Lowering the cutoff ↑ sensitivity ↓ specificity (assuming higher values indicate disease); raising the cutoff does the opposite.
ROC Curve plots sensitivity vs. $1-\text{specificity}$ for every possible cutoff; the area under the curve (AUC) summarizes overall discriminative ability.
Power is the hypothesis‑testing analogue of sensitivity; higher power → fewer type II (false‑negative) errors.
---
🔄 Key Processes
Compute Sensitivity & Specificity from a 2 × 2 table
Fill counts: TP, FP, TN, FN.
Apply formulas above.
Derive Predictive Values (need disease prevalence or pre‑test probability).
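A sketch of that derivation via Bayes' theorem (function name and numbers are illustrative assumptions):

```python
# Bayes' theorem form of the predictive values:
# PPV = sens*p / (sens*p + (1 - spec)*(1 - p))
# NPV = spec*(1 - p) / (spec*(1 - p) + (1 - sens)*p)
def predictive_values(sens: float, spec: float, prevalence: float):
    p = prevalence
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv, npv

# Same test at two prevalences: PPV collapses when disease is rare.
print(predictive_values(0.99, 0.99, prevalence=0.20))   # PPV ≈ 0.96, NPV ≈ 1.00
print(predictive_values(0.99, 0.99, prevalence=0.001))  # PPV ≈ 0.09, NPV ≈ 1.00
```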
Create an ROC Curve
For each possible cutoff, calculate sensitivity & $1-\text{specificity}$.
Plot points, connect them; compute AUC if required.
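A sketch of the whole ROC procedure on invented scores (labels: 1 = diseased, 0 = healthy), with a trapezoidal‑rule AUC:

```python
# Toy data: continuous test scores with true disease labels.
scores = [0.1, 0.3, 0.35, 0.5, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1,   1  ]
pos, neg = sum(labels), len(labels) - sum(labels)

points = [(0.0, 0.0)]  # (1 - specificity, sensitivity) at each cutoff
for cutoff in sorted(set(scores), reverse=True):
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, labels))
    points.append((fp / neg, tp / pos))
points.append((1.0, 1.0))

# Trapezoidal rule over the (FPR, TPR) pairs approximates the AUC.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(round(auc, 3))  # ≈ 0.917 for this toy data
```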
Calculate Likelihood Ratios
Use LR⁺ and LR⁻ formulas; then apply Bayes’ theorem to update pre‑test odds to post‑test odds.
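A sketch of that odds update (the pre‑test probability and LR values below are illustrative):

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * lr                       # LR multiplies the odds
    return post_odds / (1 + post_odds)              # odds -> probability

# Pre-test probability 30%; LR+ = 16.2 and LR- = 0.11 are example values.
print(post_test_probability(0.30, lr=16.2))  # positive result: ≈ 0.87
print(post_test_probability(0.30, lr=0.11))  # negative result: ≈ 0.045
```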
Construct a 95 % Confidence Interval (e.g., Wilson score) for sensitivity or specificity when sample size is modest.
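A sketch of the Wilson score interval (toy numbers: sensitivity estimated from 18 true positives among 20 diseased patients):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half_width, center + half_width

# Observed sensitivity 18/20 = 0.90, but the 95% CI is wide at this n.
print(wilson_ci(18, 20))  # ≈ (0.70, 0.97)
```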
---
🔍 Key Comparisons
Sensitivity vs. Specificity
Goal: Sensitivity → “rule out” (SnNout); Specificity → “rule in” (SpPin).
Screening Test vs. Diagnostic Test
Screening: high sensitivity, tolerates false positives.
Diagnostic: high specificity, tolerates false negatives.
PPV vs. Sensitivity
PPV depends on prevalence; Sensitivity does not.
ROC AUC vs. Single Cutoff Metrics
AUC reflects overall test performance across all thresholds; a single sensitivity/specificity pair reflects performance at one chosen threshold.
---
⚠️ Common Misunderstandings
“100 % sensitivity = perfect test” – only true for ruling out disease; the test may have 0 % specificity (it calls everyone positive).
Confusing PPV with sensitivity – PPV changes with prevalence, sensitivity does not.
Assuming LR⁺ > 1 always means disease is present – must still consider pre‑test probability; a modest LR⁺ on a low‑prevalence disease may not change odds much.
Treating a single measure (e.g., only sensitivity) as sufficient – clinical decisions require sensitivity, specificity, and prevalence context together.
---
🧠 Mental Models / Intuition
“Net” analogy: Sensitivity is how widely the net is cast (it catches every diseased fish); specificity is the mesh that lets healthy fish slip through uncaught.
Likelihood Ratio as “Odds Multiplier”: LR⁺ multiplies pre‑test odds to give post‑test odds when the test is positive; LR⁻ does the same when negative.
ROC Curve as “Performance Landscape”: The farther the curve bows toward the upper left corner, the better the test can separate disease from health regardless of cutoff.
---
🚩 Exceptions & Edge Cases
Very low prevalence → PPV can be low even with high specificity; NPV remains high (worked example after this list).
Small sample sizes → point estimates of sensitivity/specificity become unstable; confidence intervals widen dramatically.
Tests with perfect sensitivity or specificity are rare; if observed, suspect verification bias or mis‑classification of the gold standard.
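Worked example for the prevalence trap above (invented but typical numbers): with prevalence 0.1 %, sensitivity 99 %, and specificity 99 %,

$\text{PPV} = \dfrac{0.99 \times 0.001}{0.99 \times 0.001 + 0.01 \times 0.999} \approx 0.09$

so roughly 9 of every 10 positive results are false positives despite the “99 % accurate” test.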
---
📍 When to Use Which
Screening → Choose a test with ≥ 90 % sensitivity; accept lower specificity.
Confirmatory Diagnosis → Prioritize ≥ 90 % specificity; tolerate lower sensitivity.
Estimating disease probability → Use LR⁺ / LR⁻ with Bayes’ theorem instead of raw sensitivity/specificity.
Comparing two tests → Look at AUC of ROC curves; if AUCs are similar, compare at the clinically relevant cutoff.
---
👀 Patterns to Recognize
High sensitivity + negative result → disease is unlikely (SnNout).
High specificity + positive result → disease is likely (SpPin).
ROC curve hugging the left‑hand border → excellent discrimination.
LR⁺ > 10 or LR⁻ < 0.1 → strong evidence to rule in/out disease.
---
🗂️ Exam Traps
Choosing PPV when the question asks for sensitivity – remember PPV varies with prevalence.
Selecting a test with 100 % sensitivity as “best” – don’t overlook an accompanying 0 % specificity unless the question explicitly wants a rule‑out scenario.
Mixing up the false‑positive rate – it equals $1-\text{specificity}$, not $1-\text{sensitivity}$ (that is the false‑negative rate); double‑check the formula.
Misreading “SnNout” as “sensitive test, negative = rule in” – the mnemonic’s “OUT” signals rule out.
Over‑anchoring on “AUC = 0.5 means useless” – 0.5 is indeed chance‑level, but a question may give an AUC of 0.7 and expect you to call it “moderate discrimination”.
---