Introduction to Expected Values
Understand what expected value is, how to calculate it for discrete and continuous variables, and why it matters through linearity and the law of large numbers.
Summary
Expected Value: A Comprehensive Guide
What is Expected Value?
The expected value of a random variable is the long-run average outcome you would observe if you repeated a random experiment many times. It's denoted by $E[X]$ or sometimes by the Greek letter $\mu$.
Think of expected value as the "center of mass" of a probability distribution—where outcomes tend to cluster when weighted by their probabilities. If you roll a fair die repeatedly and compute the average of your rolls, that average will eventually settle around the expected value of the die.
An important insight: expected value is not required to be a possible outcome of the random variable itself. For example, the expected value of rolling a fair die is 3.5, yet you can never actually roll a 3.5.
Expected Value for Discrete Random Variables
For a discrete random variable—one that can take on a finite or countably infinite number of specific values—computing the expected value is straightforward.
If your random variable $X$ can take values $x_1, x_2, \dots, x_k$ with probabilities $p_1, p_2, \dots, p_k$ respectively, then:
$$E[X] = \sum_{i=1}^{k} x_i \, p_i$$
In words: multiply each possible outcome by its probability, then add all these products together.
Remember: the probabilities must satisfy $\sum_{i=1}^{k} p_i = 1$ (they sum to 1).
Example: Rolling a Fair Die
A fair six-sided die has outcomes 1, 2, 3, 4, 5, 6, each with probability $\frac{1}{6}$. The expected value is:
$$E[X] = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = \frac{21}{6} = 3.5$$
Each outcome contributes equally to the average because each has equal probability.
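The discrete formula can be sketched in a few lines of Python (this snippet is an illustration, not part of the original text; `expected_value` is a name chosen here):

```python
from fractions import Fraction

def expected_value(outcomes, probs):
    """Expected value of a discrete random variable:
    multiply each outcome by its probability, then sum."""
    assert sum(probs) == 1, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probs))

# Fair six-sided die: outcomes 1..6, each with probability 1/6.
die_outcomes = [1, 2, 3, 4, 5, 6]
die_probs = [Fraction(1, 6)] * 6

print(expected_value(die_outcomes, die_probs))  # 7/2, i.e. 3.5
```

Using exact `Fraction` probabilities avoids floating-point rounding and makes the sum-to-1 check reliable.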
Expected Value for Continuous Random Variables
For continuous random variables—which can take any value within an interval—expected value is computed using integration instead of summation.
If $X$ is a continuous random variable with probability density function $f(x)$, then:
$$E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx$$
The probability density function must satisfy $\int_{-\infty}^{\infty} f(x) \, dx = 1$, analogous to probabilities summing to 1 in the discrete case.
The integral averages the outcomes weighted by their probability density. Just as with the discrete case, you're multiplying values by their probabilities (or probability densities) and summing the contributions.
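To make the integral concrete, here is a rough numerical sketch (an illustration added here, assuming an exponential density as the example; exact mean is $1/\lambda$):

```python
import math

def expected_value_continuous(f, lo, hi, n=200_000):
    """Approximate E[X] = integral of x*f(x) dx with a midpoint
    Riemann sum over [lo, hi], assuming f is negligible outside."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += x * f(x)
    return total * dx

# Exponential density with rate lam = 2; the exact mean is 1/lam = 0.5.
lam = 2.0
density = lambda x: lam * math.exp(-lam * x)

print(expected_value_continuous(density, 0.0, 30.0))  # close to 0.5
```

The sum over thin slices is exactly the discrete formula in disguise: each slice contributes (value) × (probability mass in that slice).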
Key Properties of Expected Value
Linearity of Expectation
One of the most powerful properties is linearity of expectation. For any constants $a$ and $b$ and any random variables $X$ and $Y$:
$$E[aX + bY] = a \, E[X] + b \, E[Y]$$
This property is remarkable because it holds regardless of whether $X$ and $Y$ are independent. This makes expected value incredibly useful: you can break complex expressions into simpler pieces, compute their expectations separately, and combine the results.
Example: If $X$ has expected value 10 and $Y$ has expected value 5, then $E[2X + 3Y] = 2(10) + 3(5) = 35$.
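A short simulation can make the independence point vivid (illustrative code, not from the original text): here $Y$ depends deterministically on $X$, yet linearity still holds.

```python
import random

random.seed(0)

# X is a fair die roll; Y = 7 - X is completely dependent on X.
n = 200_000
xs = [random.randint(1, 6) for _ in range(n)]
ys = [7 - x for x in xs]

mean = lambda vs: sum(vs) / len(vs)
lhs = mean([2 * x + 3 * y for x, y in zip(xs, ys)])  # E[2X + 3Y], estimated
rhs = 2 * mean(xs) + 3 * mean(ys)                    # 2 E[X] + 3 E[Y], estimated

print(lhs, rhs)  # both near 2(3.5) + 3(3.5) = 17.5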
Non-negativity of Expectation
If a random variable $X$ is always non-negative (i.e., $X \geq 0$ always), then its expected value must also be non-negative: $E[X] \geq 0$.
This is intuitive: if you only have non-negative outcomes and you average them (weighted by probability), you cannot get a negative result.
The Law of Large Numbers: Why Expected Value Matters
The Law of Large Numbers explains why expected value is such a powerful concept. This theorem states that if you repeatedly draw independent samples from the same distribution, the average of those samples converges toward the expected value as the number of samples increases.
In other words, if you have independent, identically distributed observations $X_1, X_2, X_3, \ldots, X_n$ from a random variable with expected value $\mu$, then the sample average
$$\bar{X}_n = \frac{1}{n}(X_1 + X_2 + \cdots + X_n)$$
gets closer and closer to $\mu$ as $n$ grows large. This explains the intuitive interpretation of expected value as a "long-run average."
This convergence is not just a theoretical nicety—it's the foundation for why we can use expected value as a reliable predictor of behavior over time, which makes it essential for practical applications in finance, insurance, and engineering.
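The convergence is easy to watch in a simulation (illustrative snippet added here): the sample mean of fair-die rolls closes in on $\mu = 3.5$ as $n$ grows.

```python
import random

random.seed(1)

mu = 3.5  # expected value of a fair die
for n in (100, 10_000, 1_000_000):
    sample_mean = sum(random.randint(1, 6) for _ in range(n)) / n
    # The gap |sample_mean - mu| tends to shrink as n increases.
    print(n, sample_mean, abs(sample_mean - mu))
```

With a different seed the individual averages change, but the shrinking gap at large $n$ is exactly what the Law of Large Numbers guarantees.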
Why Expected Value Matters in Statistics and Probability
Expected value is a cornerstone concept with wide-ranging applications:
Foundation for variance: Variance, which measures the spread of a distribution, is itself defined using expected value: $\text{Var}(X) = E[(X - E[X])^2]$. You cannot understand variability without understanding expectation.
Higher-order moments: Expected value extends to computing second moments $E[X^2]$, skewness, and other descriptors of a distribution's shape.
Risk assessment: In finance and insurance, expected value helps quantify risk. Insurance premiums are priced based on the expected value of claims. Investment decisions depend on expected returns.
Statistical estimation: When designing estimators (methods to guess unknown population parameters), expected value guides us toward unbiased estimators—those whose expected value equals the true parameter.
Decision-making: Expected value provides a rational framework for comparing uncertain outcomes, which is essential in business, medicine, and policy decisions.
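The variance point above can be checked directly from the definition $\text{Var}(X) = E[(X - E[X])^2]$ (a sketch added here; the exact value $35/12$ for a fair die is standard):

```python
from fractions import Fraction

def E(outcomes, probs):
    """Expectation of a discrete random variable."""
    return sum(x * p for x, p in zip(outcomes, probs))

# Variance of a fair die via Var(X) = E[(X - E[X])^2].
outcomes = [Fraction(k) for k in range(1, 7)]
probs = [Fraction(1, 6)] * 6

mu = E(outcomes, probs)                            # 7/2
var = E([(x - mu) ** 2 for x in outcomes], probs)  # 35/12
print(mu, var)
```

Note that variance is just an expectation applied to the transformed variable $(X - \mu)^2$, which is why expectation is the prerequisite concept.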
Flashcards
What is the conceptual definition of expected value in terms of a repeated random experiment?
The long-run average outcome.
What physical concept does the expected value represent in a probability distribution?
The center of mass.
Is the expected value of a random variable required to be one of its possible outcomes?
No.
For a discrete random variable $X$, what is the formula for expected value $E[X]$?
$E[X]=\sum_{i=1}^{k} x_i \, p_i$ (where $x_i$ are outcomes and $p_i$ are their probabilities).
What condition must the probabilities $pi$ of a discrete random variable satisfy?
$\sum_{i=1}^{k} p_i = 1$.
For a continuous random variable $X$ with density $f(x)$, what is the formula for expected value $E[X]$?
$E[X]=\int_{-\infty}^{\infty} x \, f(x) \, dx$.
What is the required integral of a probability density function $f(x)$ over all space?
$\int_{-\infty}^{\infty} f(x) \, dx = 1$.
What is the formula for the linearity of expectation for $E[aX + bY]$?
$a E[X] + b E[Y]$ (where $a, b$ are constants and $X, Y$ are random variables).
Does the linearity of expectation require the random variables $X$ and $Y$ to be independent?
No, it holds regardless of independence.
If a random variable $X$ is always non-negative, what must be true of its expected value $E[X]$?
It is also non-negative.
How does the sample average of many independent draws behave according to the Law of Large Numbers?
It converges toward the expected value.
What two conditions must the draws of a random variable satisfy for the Law of Large Numbers to apply?
Independent and identically distributed.
What statistical measure of spread is defined using the expected value?
Variance.
Quiz
Introduction to Expected Values Quiz Question 1: What does the expected value represent in a repeated random experiment?
- the long‑run average outcome (correct)
- the most likely single outcome
- the median of the distribution
- the mode of the distribution
Introduction to Expected Values Quiz Question 2: Which integral defines the expected value of a continuous random variable?
- \(\displaystyle \int_{-\infty}^{\infty} x\,f(x)\,dx\) (correct)
- \(\displaystyle \int_{-\infty}^{\infty} f(x)\,dx\)
- \(\displaystyle \int_{-\infty}^{\infty} x^{2}\,f(x)\,dx\)
- \(\displaystyle \int_{-\infty}^{\infty} |x|\,f(x)\,dx\)
Introduction to Expected Values Quiz Question 3: What does the linearity property of expectation state?
- \(E[aX + bY] = a\,E[X] + b\,E[Y]\) (correct)
- \(E[aX + bY] = a\,E[X] + bY\)
- \(E[aX + bY] = a + b\,E[Y]\)
- \(E[aX + bY] = a\,E[X] + b\,E[Y] + \operatorname{Cov}(X,Y)\)
Introduction to Expected Values Quiz Question 4: What condition must the probabilities $p_i$ satisfy in a discrete probability distribution?
- Their sum must equal 1 (correct)
- All of them must be equal
- Each $p_i$ must be greater than 0.5
- Their sum must equal the number of possible outcomes
Introduction to Expected Values Quiz Question 5: Why is the expectation $E[X]$ guaranteed to be non‑negative when $X\ge 0$ for every outcome?
- Because $E[X]$ is a sum (or integral) of non‑negative terms (correct)
- Because probabilities must be negative to offset positive values
- Because the variance of $X$ is always zero
- Because the median of $X$ equals its mean
Introduction to Expected Values Quiz Question 6: Which condition is required for the law of large numbers to hold?
- The draws must be independent and identically distributed (correct)
- The draws may be dependent as long as they have the same mean
- The random variable must be continuous only
- The sample size must be less than 30
Introduction to Expected Values Quiz Question 7: Which expression correctly defines the variance of a random variable \(X\) using expected value?
- \(\operatorname{Var}(X) = E[(X - E[X])^{2}]\) (correct)
- \(\operatorname{Var}(X) = (E[X])^{2}\)
- \(\operatorname{Var}(X) = E[X] - E[X^{2}]\)
- \(\operatorname{Var}(X) = E[X^{2}] + (E[X])^{2}\)
Introduction to Expected Values Quiz Question 8: What is the second moment of a random variable \(X\) about the origin?
- \(E[X^{2}]\) (correct)
- \(E[X]\)
- \(E[(X-\mu)^{3}]\)
- \(\operatorname{Var}(X)\)
Introduction to Expected Values Quiz Question 9: An estimator \(\hat\theta\) is unbiased if which condition involving expected value holds?
- \(E[\hat\theta] = \theta\) (correct)
- \(Var(\hat\theta) = 0\)
- \(E[\hat\theta] > \theta\)
- \(E[\hat\theta]\) is symmetric around \(\theta\)
Key Concepts
Random Variables
Discrete random variable
Continuous random variable
Probability mass function
Probability density function
Statistical Concepts
Expected value
Variance
Linearity of expectation
Law of large numbers
Definitions
Expected value
The long‑run average outcome of a random experiment, representing the mean of a probability distribution.
Discrete random variable
A random variable that can assume only a countable set of distinct values, each with an associated probability.
Continuous random variable
A random variable whose possible values form a continuum and is described by a probability density function.
Linearity of expectation
The property that the expected value of a linear combination of random variables equals the same linear combination of their expectations, regardless of independence.
Law of large numbers
A theorem stating that the sample average of independent, identically distributed random variables converges to their expected value as the number of observations increases.
Probability density function
A function that gives the relative likelihood of each outcome for a continuous random variable, integrating to one over its entire range.
Probability mass function
A function that assigns probabilities to each possible outcome of a discrete random variable, with the probabilities summing to one.
Variance
The expected squared deviation of a random variable from its mean, quantifying the spread or dispersion of its distribution.