Philosophy of Science Study Guide
📖 Core Concepts
Philosophy of science – examines the foundations, methods, and meaning of scientific practice; sits at the intersection of metaphysics, epistemology, and logic.
Demarcation problem – the task of distinguishing genuine science from pseudoscience. Popper’s solution: falsifiability (a claim must be in principle disprovable).
Scientific explanation – traditionally the Deductive‑Nomological (DN) model: phenomena are deduced from universal laws; alternatives include statistical relevance, unification, and causal mechanisms.
Problem of induction (Hume) – inductive reasoning never yields certainty; repeated instances at best raise the probability of a generalization.
Abduction (Inference to the Best Explanation) – choose the hypothesis that best accounts for the data (often guided by simplicity or unity).
Realism vs. anti‑realism – realists claim successful theories are (approximately) true; anti‑realists/instrumentalists care only about predictive usefulness.
Theory‑laden observation – what we see is filtered through existing theoretical concepts and cognitive frameworks.
Paradigm (Kuhn) – a shared set of achievements, methods, and exemplars that guide “normal science”; shifts occur when anomalies accumulate.
Coherentism / Duhem‑Quine – no hypothesis can be tested in isolation; every test involves a network of auxiliary assumptions, so a failed prediction does not pinpoint which part is false.
Methodological pluralism – Feyerabend’s claim that no single scientific method governs all research.
Randomization & placebo – experimental tools that reduce bias and isolate causal effects in medicine.
---
📌 Must Remember
Popper’s falsifiability: Scientific ⇔ refutable in principle (capable of being shown false by some possible observation).
DN model: Explanation = Law + Initial conditions ⟹ Phenomenon.
Induction: Repeated instances → higher probability, not certainty.
Bayesian update: Posterior ∝ Prior × Likelihood.
Occam’s razor: Prefer the simplest viable theory (no universal metric).
Kuhn’s paradigm shift: Anomalies → crisis → revolution → new paradigm.
Duhem‑Quine thesis: A single experiment underdetermines which component of a theory‑network is false.
p‑value definition: $p = P(\text{data at least as extreme as observed} \mid H_0)$ (probability of results this extreme or more, assuming the null hypothesis is true).
Placebo effect: Improvement due to expectations, not active ingredients.
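The p‑value definition above is easy to misread, so here is a worked illustration. The scenario (10 coin flips, 8 heads, a fair‑coin null) is hypothetical, chosen only to make the "at least as extreme" clause concrete:

```python
from math import comb

# p-value sketch: the probability, under the null hypothesis (a fair coin),
# of data at least as extreme as what was observed -- here, 8 or more heads
# in 10 flips (one-sided). Numbers are illustrative, not from the guide.
n, observed_heads = 10, 8
p_value = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2**n

print(round(p_value, 4))  # 0.0547 -- this is P(data >= observed | H0), NOT P(H0 | data)
```

Note that the sum runs over every outcome as extreme or more extreme than the observation, which is what distinguishes a p‑value from the probability of the exact observed data.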
---
🔄 Key Processes
Falsification (Popper)
Propose conjecture → deduce testable predictions → attempt refutation → if survived, theory is corroborated (not proven).
Abductive inference
List competing hypotheses → evaluate explanatory power, simplicity, coherence → select best‑explaining hypothesis.
Paradigm shift (Kuhn)
Normal science → encounter anomalies → crisis → emergence of a new framework → adoption through social and logical persuasion.
Randomized Controlled Trial (RCT)
Randomly assign participants → apply treatment or placebo → compare outcomes → infer causal effect if groups are equivalent.
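The RCT steps above can be sketched as a simulation. All numbers (100 participants, a true treatment effect of 2.0, unit‑variance noise) are hypothetical, chosen only to show why randomization lets the group difference estimate the causal effect:

```python
import random
import statistics

# Minimal RCT sketch with simulated data (hypothetical effect sizes).
random.seed(0)

# Step 1: randomly assign participants -- randomization balances the groups
# and reduces allocation bias.
participants = list(range(100))
random.shuffle(participants)
treatment, placebo = participants[:50], participants[50:]

# Step 2: apply treatment or placebo. Simulated outcomes: the treatment adds
# a true effect of 2.0 on top of individual noise.
treated_scores = [2.0 + random.gauss(0, 1) for _ in treatment]
placebo_scores = [0.0 + random.gauss(0, 1) for _ in placebo]

# Steps 3-4: compare outcomes; with equivalent groups, the difference in
# means estimates the causal effect of the intervention.
diff = statistics.mean(treated_scores) - statistics.mean(placebo_scores)
print(round(diff, 2))  # close to the true effect of 2.0
```

Because assignment was random, the estimated difference converges on the true effect as the sample grows; without randomization, pre‑existing group differences could masquerade as a treatment effect.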
Bayesian belief updating
Start with prior $P(H)$ → collect data $D$ → compute likelihood $P(D|H)$ → obtain posterior $P(H|D) = \frac{P(D|H)P(H)}{P(D)}$.
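The updating steps above can be run with concrete numbers. The prior and likelihoods below are hypothetical, chosen only to illustrate the mechanics of Bayes' theorem:

```python
# Bayesian belief updating sketch (hypothetical numbers):
prior_h = 0.3        # P(H): initial credence in hypothesis H
p_d_given_h = 0.8    # P(D|H): likelihood of the data if H is true
p_d_given_not_h = 0.2  # P(D|~H): likelihood of the data if H is false

# Law of total probability: P(D) = P(D|H)P(H) + P(D|~H)P(~H)
p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)

# Bayes' theorem: P(H|D) = P(D|H) P(H) / P(D)
posterior_h = p_d_given_h * prior_h / p_d

print(round(posterior_h, 3))  # 0.632 -- the data raised credence from 0.30 to ~0.63
```

This also illustrates the "Bayesian subjectivity" point below: a scientist starting from a different prior would reach a different posterior from the same data.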
---
🔍 Key Comparisons
Falsifiability vs. Verificationism – Popper: must be refutable; Logical positivists: must be verifiable.
Realism vs. Instrumentalism – Realism cares about truth of unobservables; Instrumentalism cares only about predictive success.
Deductive‑Nomological vs. Statistical Relevance (Salmon) – DN: explanation via deduction from universal laws; Salmon: explanation cites factors that are statistically relevant to the outcome (i.e., that change its probability), even where no universal law is available.
Reductionism vs. Emergence – Reductionism: higher‑level phenomena fully explainable by lower‑level laws; Emergence: novel, higher‑level regularities arise that are not derivable from lower‑level laws alone.
Coherentism vs. Foundationalism – Coherentism: justification arises from overall belief coherence; Foundationalism: some beliefs are self‑justified foundations.
---
⚠️ Common Misunderstandings
“Falsification proves a theory true.” – It only shows the theory survived a test; future tests may fail.
“All scientific statements are observable.” – Many legitimate theories involve unobservables (e.g., electrons) that are inferred indirectly.
“Randomization guarantees truth.” – Randomization reduces bias but cannot fix flawed experimental design or confounding variables.
“The simplest theory is always correct.” – Simplicity is a heuristic, not a logical guarantee of truth.
“Science is completely value‑free.” – Epistemic and social values shape question selection, methodology, and interpretation.
---
🧠 Mental Models / Intuition
“Falsifiability filter” – Imagine a sieve that only lets through claims that could be disproved; anything that slips through is not scientific.
“Network of beliefs” – Picture a web where pulling one strand (a hypothesis) affects many others; testing impacts the whole web, not a single node.
“Paradigm as a lens” – Scientists see data through a paradigm‑shaped lens; a shift is like swapping lenses, revealing previously invisible features.
---
🚩 Exceptions & Edge Cases
Statistical explanations – Chance events can be genuine explanations even when no deterministic law exists (e.g., radioactive decay obeys only probabilistic laws).
Greedy reductionism – Over‑simplifying can ignore crucial higher‑level mechanisms (e.g., sociocultural factors in health).
Uniformitarianism limits – In geology, catastrophic events (mass extinctions) are recognized despite the uniformity principle.
Bayesian subjectivity – Prior probabilities reflect personal credence; different scientists may start with different priors.
---
📍 When to Use Which
Falsifiability test → when assessing whether a claim qualifies as science (vs. pseudoscience).
Abductive reasoning → early stage hypothesis generation when multiple explanations compete.
DN model → for phenomena that can be derived from well‑established universal laws (e.g., planetary motion).
Statistical relevance model → when laws are unavailable but strong probabilistic links exist (e.g., epidemiology).
Randomized trial → to establish causal effect of a medical intervention.
Bayesian inference → when prior information is substantial or sequential updating is needed.
---
👀 Patterns to Recognize
Anomaly → crisis → paradigm shift pattern in historical scientific revolutions.
Auxiliary hypothesis rescue – when a prediction fails, scientists often tweak background assumptions rather than discard the core theory.
“Anything goes” – methodological diversity appears especially in interdisciplinary or emerging fields.
Value‑laden language – terms like “normal” vs. “abnormal” often signal underlying sociopolitical influences.
---
🗂️ Exam Traps
Choosing “verification” as the demarcation criterion – exam will likely expect Popper’s falsifiability.
Confusing p‑value with probability that the null hypothesis is true – $p$ is P(data at least as extreme | H₀), not P(H₀ | data).
Assuming a single experiment can falsify a theory – Duhem‑Quine shows tests are theory‑network dependent.
Equating “simpler” with “more true” – simplicity is a pragmatic guide, not a logical proof of truth.
Mistaking instrumentalism for denial of reality – instrumentalists accept that theories may be true; they simply don’t require truth for usefulness.
---