Hypothesis Study Guide
📖 Core Concepts
Hypothesis – a testable, reproducible explanation for a phenomenon, grounded in observation.
Working hypothesis – a provisional idea used to steer research; discarded when better explanations appear.
Scientific hypothesis requirements – grounded in prior knowledge (an educated guess), stated with operational definitions, falsifiable, and reproducible.
Types of hypotheses
Mathematical model – expressed with equations.
Existential – claims a particular instance has a property.
Universal – claims every instance has a property.
Entrepreneurial – a business assumption (e.g., about customer demand) tested through verifiable/falsifiable experiments.
Evaluation criteria – testability/falsifiability, parsimony (Occam’s Razor), scope, fruitfulness, conservatism.
Statistical hypothesis testing – compares a null hypothesis (H₀) (no effect/relationship) with an alternative hypothesis (H₁) (effect exists).
Significance level (α) – pre‑chosen probability of wrongly rejecting H₀ (commonly .10, .05, .01).
Effect size – quantitative magnitude of a result (small, medium, large) used to gauge practical importance.
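Effect size can be computed directly from the data. A minimal sketch of Cohen's d for two independent groups, using the pooled standard deviation (the function name `cohens_d` is illustrative):

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2   # sample variances
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd
```

By Cohen's conventional benchmarks, |d| ≈ 0.2 is small, 0.5 medium, and 0.8 large, though these cutoffs are context-specific.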
---
📌 Must Remember
A hypothesis must be falsifiable (Popper).
Parsimony: fewer assumptions = stronger hypothesis, all else equal.
Scope: broader applicability = higher value, but may reduce parsimony.
Null vs. Alternative
H₀: no relationship/effect.
H₁: some relationship/effect (two‑sided if direction unknown, one‑sided if direction predicted).
α must be set before data collection; never after looking at results.
Power depends on α, effect size, and sample size – larger N → higher power.
Effect‑size categories (Cohen’s d, r, etc.) are context‑specific; define them for each test.
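The power relationship above can be made concrete with a rough sample-size sketch for a two-sided, two-sample comparison. This uses the normal approximation (dedicated planning tools add a small t-distribution correction); the function name `n_per_group` is illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N via n = 2 * ((z_{1-alpha/2} + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. ~1.96 for alpha=.05
    z_power = NormalDist().inv_cdf(power)           # e.g. ~0.84 for power=.80
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)
```

A medium effect (d = 0.5) at α = .05 and power = .80 needs roughly 63 per group; because d appears squared in the denominator, halving the effect size roughly quadruples the required N.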
---
🔄 Key Processes
Formulating a hypothesis
Observe → generate educated guess → write in operational, testable terms.
Evaluating a hypothesis (criteria checklist)
Is it falsifiable? → Is it parsimonious? → What is its scope? → Is it fruitful? → Does it align with existing knowledge (conservatism)?
Statistical testing workflow
State H₀ and H₁.
Choose α before data collection.
Determine required sample size for desired power (often 0.80).
Collect data, compute test statistic.
Compare p‑value to α → reject or fail to reject H₀.
Report effect size and confidence interval.
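The workflow above can be sketched end to end. This assumes a one-sample z-test with a normal approximation for the p-value; a real analysis with small N would use a t-test (e.g., `scipy.stats.ttest_1samp`). Names and data are illustrative:

```python
import math
from statistics import mean, stdev

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def z_test(sample, mu0, alpha=0.05):
    """Two-sided one-sample test of H0: population mean == mu0."""
    n = len(sample)
    z = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
    p = 2 * (1 - phi(abs(z)))                     # two-sided p-value
    decision = "reject H0" if p < alpha else "fail to reject H0"
    return z, p, decision
```

Note that α is fixed in the function signature before any data arrive, matching step 2 of the workflow.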
---
🔍 Key Comparisons
Existential vs. Universal
Existential: “∃ x such that P(x)” (at least one case).
Universal: “∀ x , P(x)” (all cases).
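The two quantifiers map directly onto Python's `any` and `all` built-ins; a small illustration with made-up data:

```python
data = [2, 4, 6, 8]

# Existential claim: ∃ x such that x > 5 — true if at least one case holds.
exists_gt5 = any(x > 5 for x in data)       # True: 6 and 8 qualify

# Universal claim: ∀ x, x is even — true only if every case holds.
all_even = all(x % 2 == 0 for x in data)    # True

# A single counterexample falsifies a universal claim.
all_gt5 = all(x > 5 for x in data)          # False: 2 is a counterexample
```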
Working hypothesis vs. Theory
Working: provisional, guides next steps, may be discarded.
Theory: robust, repeatedly confirmed, explains many phenomena.
Two‑sided vs. One‑sided alternative
Two‑sided: tests for any difference (↑ or ↓).
One‑sided: tests for a specific direction; higher power but risk of missing opposite effect.
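The power difference shows up in the p-values themselves: when the observed effect lies in the predicted direction, the one-sided p is exactly half the two-sided p. Illustrated here with a normal test statistic (the value z = 1.8 is made up):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

z = 1.8                               # hypothetical observed test statistic
p_two_sided = 2 * (1 - phi(abs(z)))   # ~0.072: not significant at alpha=.05
p_one_sided = 1 - phi(z)              # ~0.036: significant, if the direction
                                      # was predicted a priori
# If the true effect runs opposite the predicted direction,
# the one-sided p approaches 1 and the effect is missed entirely.
```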
Parsimony vs. Scope
Parsimony: fewer entities → preferred.
Scope: wider applicability → higher value but may require more complexity.
---
⚠️ Common Misunderstandings
“A hypothesis is a guess” → It must be educated and testable; random speculation is not a scientific hypothesis.
“If H₀ is not rejected, it is proven true” → Failure to reject only means insufficient evidence; H₀ remains tentative.
“Higher α = better” → Larger α raises Type I error risk (false positive).
“A statistically significant result equals a large effect” → Significance only speaks to probability; effect size measures magnitude.
“One‑sided tests are always superior” → Use only when theory a priori predicts direction; otherwise you invite bias.
---
🧠 Mental Models / Intuition
Falsifiability filter: Imagine a hypothesis as a door; it passes the filter only if some conceivable experiment could smash the door down. A door no experiment could ever touch is not scientific.
Parsimony balance beam: Picture a scale with “simplicity” on one side and “explanatory power” on the other; the best hypothesis balances them.
Null hypothesis as a default setting: Treat H₀ like a computer’s default state—only change it when evidence clearly pushes you away.
---
🚩 Exceptions & Edge Cases
Non‑falsifiable statements (e.g., “the universe is infinite”) are philosophical, not scientific hypotheses.
Small sample, huge effect: May achieve significance but be unreliable; beware of over‑interpreting.
Multiple comparisons: Each additional test inflates overall Type I error; apply corrections (Bonferroni, FDR).
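The Bonferroni correction mentioned above is simple to state in code: with m tests, compare each p-value to α/m instead of α. A sketch (FDR procedures such as Benjamini–Hochberg are less conservative alternatives):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0 for test i only if p_i < alpha / m, m = number of tests."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]

# With 3 tests at alpha=.05, each p-value must beat .05/3 ≈ .0167,
# which caps the family-wise Type I error rate at .05.
```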
---
📍 When to Use Which
Use a working hypothesis when exploring a new area and need a guiding framework.
Adopt a universal hypothesis if theory predicts a rule that should hold for all instances.
Choose a one‑sided alternative only when prior theory strongly predicts direction and a two‑sided test would waste power.
Select a larger α (e.g., .10) for exploratory pilot studies; use .05 or .01 for confirmatory research.
Apply parsimonious models for initial testing; expand scope only after basic validation.
---
👀 Patterns to Recognize
“No relation” wording in a question usually signals H₀.
Effect‑size language (“small/medium/large”) hints that you must report magnitude, not just p‑value.
Scope clues (“applies to all species”) indicate a universal hypothesis.
“Pre‑specify” language (α, sample size) flags a well‑designed experiment.
---
🗂️ Exam Traps
Trap 1: Selecting a one‑sided test when the question never specified direction → penalty for unjustified power increase.
Trap 2: Confusing “failure to reject H₀” with “prove H₀ true.”
Trap 3: Ignoring the need to define effect‑size categories; exam may ask for Cohen’s d interpretation.
Trap 4: Choosing α after seeing the data (e.g., “p‑value is .06, so I’ll set α=.07”) – invalid.
Trap 5: Overlooking parsimony; picking a complex model when a simpler one explains data equally well.
---