Probability Study Guide
📖 Core Concepts
Probability: Numeric measure of how likely an event is, ranging from 0 (impossible) to 1 (certain).
Sample Space ($S$): The set of all possible outcomes of an experiment.
Event: Any subset of $S$ (e.g., $A$, $B$).
Kolmogorov Axioms:
$0 \le P(E) \le 1$ for any event $E$.
$P(S)=1$.
For countable mutually exclusive events $E_i$, $P\big(\bigcup_i E_i\big)=\sum_i P(E_i)$.
Complement: $\overline{A}$ contains all outcomes not in $A$, with $P(\overline{A}) = 1 - P(A)$.
Addition Rules:
Mutually exclusive: $P(A\cup B)=P(A)+P(B)$.
General: $P(A\cup B)=P(A)+P(B)-P(A\cap B)$.
Independence: $A$ and $B$ independent ⇔ $P(A\cap B)=P(A)P(B)$.
Conditional Probability: $P(A\mid B)=\dfrac{P(A\cap B)}{P(B)}$, $P(B)>0$.
Bayes’ Theorem: $P(H\mid E)=\dfrac{P(E\mid H)P(H)}{P(E)}$.
Interpretations: Frequentist (long‑run frequency), Propensity (tendency of a single trial), Subjective/Bayesian (degree of belief).
Bayesian Updating: Prior $\times$ Likelihood → Posterior (normalized).
---
📌 Must Remember
$0\le P(E)\le 1$ and $P(S)=1$.
Complement rule: $P(\overline{A}) = 1 - P(A)$.
General addition rule: subtract the overlap $P(A\cap B)$.
Independence test: $P(A\cap B)=P(A)P(B)$.
Conditional probability formula requires $P(B)>0$.
Bayes’ theorem links prior, likelihood, and posterior.
In continuous spaces, a single point can have probability zero yet be possible.
Frequentist probability = long‑run relative frequency; Bayesian probability = personal belief.
---
🔄 Key Processes
Computing a Theoretical Probability
List all equally likely outcomes.
Count outcomes that satisfy the event.
Divide: $\displaystyle P = \frac{\text{favorable outcomes}}{\text{total outcomes}}$.
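The three steps above can be sketched in Python; the even-die event is a hypothetical example, and `Fraction` keeps the result exact rather than a rounded float:

```python
from fractions import Fraction

# Hypothetical example: probability of rolling an even number on a fair die.
sample_space = [1, 2, 3, 4, 5, 6]                     # all equally likely outcomes
favorable = [x for x in sample_space if x % 2 == 0]   # outcomes satisfying the event

p_even = Fraction(len(favorable), len(sample_space))  # favorable / total
print(p_even)  # 1/2
```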
Updating Beliefs with Bayes’ Theorem
Identify hypothesis $H$ (prior $P(H)$) and evidence $E$ (likelihood $P(E\mid H)$).
Compute marginal $P(E)=\sum_i P(E\mid H_i)P(H_i)$ (if multiple hypotheses).
Apply $P(H\mid E)=\frac{P(E\mid H)P(H)}{P(E)}$.
Normalize if needed.
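A minimal sketch of this update, using a medical-test scenario with hypothetical numbers (the prevalence, sensitivity, and false-positive rate below are assumed for illustration):

```python
# Hypothetical priors and likelihoods for a diagnostic-test example.
p_h = 0.01              # prior P(H): disease prevalence (assumed)
p_e_given_h = 0.95      # likelihood P(E|H): test sensitivity (assumed)
p_e_given_not_h = 0.05  # false-positive rate P(E|not H) (assumed)

# Marginal P(E) via the law of total probability over H and not-H.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior P(H|E) = P(E|H) P(H) / P(E).
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 4))  # ≈ 0.161, far below the 0.95 sensitivity
```

Note how the denominator $P(E)$ does the normalizing: dividing by it is what turns the raw product into a probability.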
Checking Independence
Compute $P(A)$, $P(B)$, and $P(A\cap B)$.
Verify $P(A\cap B) \stackrel{?}{=} P(A)P(B)$.
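The check can be run exactly with `Fraction`; the die events below ("even" and "at most 4") are a hypothetical example that happens to pass the test:

```python
from fractions import Fraction

# Hypothetical example: one roll of a fair die.
outcomes = set(range(1, 7))
A = {x for x in outcomes if x % 2 == 0}  # "even": {2, 4, 6}
B = {x for x in outcomes if x <= 4}      # "at most 4": {1, 2, 3, 4}

def p(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(outcomes))

# Independence test: P(A ∩ B) =? P(A) P(B)
independent = p(A & B) == p(A) * p(B)
print(independent)  # True: 1/3 == 1/2 * 2/3
```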
Applying the General Addition Rule
Find $P(A)$, $P(B)$, and $P(A\cap B)$.
Plug into $P(A\cup B)=P(A)+P(B)-P(A\cap B)$.
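A quick numeric instance, using the standard deck example (hearts and face cards, with the 3 heart face cards as the overlap):

```python
from fractions import Fraction

# Draw one card from a standard 52-card deck.
p_a = Fraction(13, 52)   # P(heart)
p_b = Fraction(12, 52)   # P(face card)
p_ab = Fraction(3, 52)   # P(heart AND face card): J, Q, K of hearts

# General addition rule: subtract the overlap to avoid double-counting.
p_union = p_a + p_b - p_ab
print(p_union)  # 11/26
```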
---
🔍 Key Comparisons
Frequentist vs. Subjective (Bayesian)
Frequentist: Probability = long‑run frequency of repeated trials.
Bayesian: Probability = personal degree of belief, updated with data.
Theoretical vs. Empirical Probability
Theoretical: Calculated from counting outcomes.
Empirical: Measured from observed relative frequencies.
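The contrast can be seen in a short simulation; the coin-flip model and seed below are assumptions for reproducibility, and the empirical frequency only approximates the theoretical 0.5:

```python
import random

random.seed(42)  # assumed seed, for a reproducible run

# Theoretical: P(heads) for a fair coin is exactly 0.5.
# Empirical: the observed relative frequency over many trials.
trials = 10_000
heads = sum(random.random() < 0.5 for _ in range(trials))
empirical = heads / trials
print(empirical)  # close to 0.5, rarely exactly 0.5
```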
Mutually Exclusive vs. Independent
Mutually Exclusive: $A\cap B=\varnothing$ → $P(A\cup B)=P(A)+P(B)$.
Independent: Occurrence of one does not affect the other's probability → $P(A\cap B)=P(A)P(B)$.
Complement vs. Union
Complement: $P(\overline{A}) = 1 - P(A)$.
Union: $P(A\cup B)=P(A)+P(B)-P(A\cap B)$.
---
⚠️ Common Misunderstandings
“Zero probability means impossible.” In continuous distributions, single points have probability 0 but can occur.
Confusing mutually exclusive with independent. Mutually exclusive events cannot occur together, so they are not independent (unless one has probability 0).
Forgetting to normalize in Bayes’ theorem. The denominator $P(E)$ is essential; omitting it yields an unscaled posterior.
Assuming $P(A\mid B)=P(B\mid A)$. They are generally different; since both share the numerator $P(A\cap B)$, they coincide only when $P(A)=P(B)$.
---
🧠 Mental Models / Intuition
Probability as “share of the pie.” Imagine the sample space as a pie; each outcome gets a slice proportional to its probability.
Bayes as “belief upgrade.” Prior = current belief; evidence is new information that nudges the belief toward the posterior.
Independence = “no domino effect.” One event’s outcome doesn’t tip the scales for the other.
---
🚩 Exceptions & Edge Cases
Continuous Sample Spaces: Individual points have $P=0$, but intervals have non‑zero probability.
Zero‑Probability Conditioning: $P(B)=0$ makes $P(A\mid B)$ undefined; avoid conditioning on impossible events.
Non‑mutually exclusive events: Must use the general addition rule, not the simple sum.
---
📍 When to Use Which
Use Theoretical Probability when outcomes are equally likely and can be enumerated (e.g., dice, cards).
Use Empirical Probability when you have experimental data but no clear counting model.
Apply Bayes’ Theorem when you have a prior belief and new evidence to update it.
Use the General Addition Rule whenever events may overlap; reserve the simple sum for mutually exclusive cases.
Test for Independence before multiplying probabilities; if independence fails, revert to conditional probability: $P(A\cap B)=P(A)P(B\mid A)$.
---
👀 Patterns to Recognize
“Given” statements → Conditional probability; look for $P(\text{event}\mid\text{condition})$.
“Prior” and “likelihood” language → Bayes’ theorem.
“Either…or…both” → General addition rule (subtract the overlap).
Repeated independent trials (e.g., multiple coin flips) → Multiply individual probabilities.
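The multiplication pattern for repeated independent trials, using three fair coin flips as a hypothetical example:

```python
from fractions import Fraction

# P(heads, heads, heads) for three independent flips of a fair coin.
p_heads = Fraction(1, 2)
p_three_heads = p_heads ** 3  # multiply per-trial probabilities
print(p_three_heads)  # 1/8
```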
---
🗂️ Exam Traps
Choosing the simple addition rule for overlapping events – leads to double‑counting; remember to subtract $P(A\cap B)$.
Treating mutually exclusive events as independent – they are actually dependent (if one occurs, the other cannot).
Omitting the denominator $P(E)$ in Bayes’ theorem – results in an unnormalized “posterior”.
Confusing $P(A\mid B)$ with $P(B\mid A)$ – they are equal only when $P(A)=P(B)$.
Assuming a zero‑probability event cannot happen – in continuous distributions, points have zero probability yet are possible.