Time Series Study Guide
📖 Core Concepts
Time series – ordered sequence of observations taken at equally‑spaced time points (seconds, days, years, …).
Temporal ordering – unlike cross‑sectional data, the order matters; nearby points are usually more strongly related.
Stochastic process – a probabilistic model that generates the series, capturing randomness.
Stationarity
Strict – full joint distribution unchanged over time.
Wide‑sense – constant mean and autocovariance that depends only on the lag.
Ergodicity – time averages equal ensemble averages, so a single long series can be used to estimate statistical properties.
Goal of analysis – extract statistics (trend, seasonality, irregular fluctuations) and build a model for forecasting future values.
📌 Must Remember
Autocorrelation = similarity of a series with its own lagged version.
Cross‑correlation = similarity between two different series at various lags.
Linear vs. Non‑linear – linear methods assume proportional effects of past values; non‑linear methods handle chaotic or heteroskedastic behavior.
Parametric vs. Non‑parametric – parametric models (AR, MA, ARMA, etc.) are defined by a few parameters; non‑parametric approaches estimate covariance/spectrum directly.
Univariate vs. Multivariate – single‑series analysis vs. joint analysis of several series (e.g., VAR).
Classical linear models
AR(p): $X_t = \phi_1 X_{t-1} + \dots + \phi_p X_{t-p} + \varepsilon_t$
MA(q): $X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}$
ARMA(p,q) = AR(p) + MA(q)
ARIMA(p,d,q) – difference the series $d$ times to achieve (wide‑sense) stationarity, then apply ARMA(p,q).
Vector Autoregression (VAR) – extends AR to multiple series that influence each other.
AR‑X models – include exogenous inputs that affect the primary series but are not affected by it.
ARCH/GARCH – models where the variance changes over time (heteroskedasticity).
Time‑varying AR – coefficients $\phi_i(t)$ evolve with time, often estimated via Kalman filtering.
Frequency‑domain tools – Fourier transform (spectral analysis) and wavelet analysis decompose the series into frequency components.
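As a minimal sketch of the AR recursion above (NumPy only; the coefficient value and series length are illustrative assumptions), an AR(1) process can be simulated and its coefficient recovered by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process: X_t = phi * X_{t-1} + eps_t
phi, n = 0.7, 5000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Least-squares estimate of phi: regress X_t on X_{t-1}
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(round(phi_hat, 2))  # close to the true value 0.7
```

With 5000 observations the estimate lands very close to the true coefficient, which is why least squares (or maximum likelihood) is the standard estimation route for AR models.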
🔄 Key Processes
Model Identification
Plot the series → look for trend/seasonality.
Test for stationarity (visual inspection, unit‑root tests such as the augmented Dickey–Fuller test).
Examine autocorrelation (ACF) and partial autocorrelation (PACF) to suggest AR, MA, or ARMA orders.
Differencing (if needed)
Apply first difference: $\Delta X_t = X_t - X_{t-1}$.
Repeat until ACF/PACF patterns resemble a stationary process.
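A quick NumPy illustration of why differencing works (the slope and noise level are illustrative assumptions): first differencing a linear‑trend series leaves a series whose mean no longer grows with time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Series with a linear trend plus noise: non-stationary in the mean
t = np.arange(200)
x = 0.5 * t + rng.standard_normal(200)

# First difference: Delta X_t = X_t - X_{t-1}
dx = np.diff(x)

# The trend is gone: the differenced series fluctuates around the
# slope (0.5) instead of drifting upward
print(round(dx.mean(), 2))
```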
Parameter Estimation
Use maximum‑likelihood or least‑squares estimation (built into statsmodels and MATLAB).
Diagnostic Checking
Residuals should be white noise (no autocorrelation).
Ljung‑Box test or inspect residual ACF.
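The residual check above can be sketched with a hand‑rolled sample ACF (NumPy only; `sample_acf` is an illustrative helper, not a library function). For white‑noise residuals, every positive‑lag autocorrelation should sit inside roughly $\pm 2/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_acf(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# White-noise "residuals": ACF should be ~0 at every positive lag
resid = rng.standard_normal(2000)
acf = sample_acf(resid, 10)
print(acf[0])                  # exactly 1.0 at lag 0, by construction
print(np.abs(acf[1:]).max())   # small: within ~2/sqrt(2000) ≈ 0.045
```

The Ljung–Box statistic formalizes this by pooling the squared autocorrelations across lags into a single test.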
Forecasting
Generate $h$‑step ahead predictions using the fitted model.
For VAR, forecast all series simultaneously.
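For a zero‑mean AR(1), the $h$‑step forecast has a closed form, $\hat{X}_{T+h} = \hat{\phi}^{\,h} X_T$, so the forecast decays geometrically toward the mean. A minimal sketch (the fitted coefficient and last observation are illustrative assumptions):

```python
# h-step-ahead forecasts from a fitted zero-mean AR(1):
# X_{T+h|T} = phi^h * X_T — each step just applies phi again
phi_hat = 0.7      # assume this was estimated earlier
x_last = 2.0       # last observed value X_T

forecasts = [phi_hat ** h * x_last for h in range(1, 6)]
print([round(f, 3) for f in forecasts])
# → [1.4, 0.98, 0.686, 0.48, 0.336]
```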
Time‑Varying Spectrum (Kalman filter) – recursively update AR coefficients as new data arrive to track non‑stationary spectral changes.
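The recursive coefficient update can be sketched as a scalar Kalman filter whose state is the AR(1) coefficient itself (all noise variances and the random‑walk state model are illustrative assumptions, not a prescribed design):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) series, then track its coefficient recursively.
n, phi_true = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Kalman filter: state phi_t follows a random walk (variance q),
# observation X_t = phi_t * X_{t-1} + eps_t (variance r).
phi_est, p = 0.0, 1.0   # initial state estimate and its variance
q, r = 1e-4, 1.0
for t in range(1, n):
    p += q                               # predict: state variance grows
    h = x[t - 1]                         # time-varying "design" value
    k = p * h / (h * h * p + r)          # Kalman gain
    phi_est += k * (x[t] - h * phi_est)  # update with the innovation
    p *= (1.0 - k * h)                   # posterior variance

print(round(phi_est, 2))  # should sit near the true value 0.7
```

Because each update uses only the newest observation, the same loop keeps tracking if `phi_true` drifts over time, which is exactly the non‑stationary case a static AR fit cannot handle.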
🔍 Key Comparisons
AR vs. MA
AR: current value = weighted sum of past values.
MA: current value = weighted sum of past shocks (error terms).
Parametric vs. Non‑parametric
Parametric: assumes a specific functional form (e.g., ARMA).
Non‑parametric: estimates covariance/spectrum directly, no fixed model.
Linear vs. Non‑linear
Linear: proportional response to past values (e.g., AR).
Non‑linear: response may change with magnitude or state (e.g., ARCH, NARX).
Univariate vs. Multivariate
Univariate: one series, models like ARIMA.
Multivariate: several series, models like VAR, VAR‑X.
Frequency‑domain vs. Time‑domain
Frequency: focus on periodic components (Fourier, wavelets).
Time: focus on lag relationships (autocorrelation, AR models).
⚠️ Common Misunderstandings
“All series must be differenced.” – Only non‑stationary series need differencing; over‑differencing destroys useful structure.
“High autocorrelation = good model.” – Strong autocorrelation in residuals indicates model misspecification, not model strength.
“AR and MA orders are interchangeable.” – They affect ACF/PACF differently; swapping them changes the implied dynamics.
“Non‑linear = always better.” – Non‑linear models add complexity; use only when linear diagnostics (e.g., residual heteroskedasticity, chaos) suggest it.
🧠 Mental Models / Intuition
“Memory” analogy – An AR(p) model “remembers” the last p observations; an MA(q) model “remembers” the last q random shocks.
Stationarity as “steady weather” – If the statistical climate (mean, variance) doesn’t change over time, the process is stationary; otherwise you need to “re‑calibrate” (difference or detrend).
Frequency view – Think of a song: Fourier analysis tells you which notes (frequencies) are playing; wavelets tell you when each note appears.
🚩 Exceptions & Edge Cases
Seasonally stationary series – periodic pattern repeats each season; may require seasonal differencing (e.g., SARIMA).
Fractionally integrated (ARFIMA) – integration order $d$ can be non‑integer, capturing long‑memory behavior.
Time‑varying coefficients – when dynamics evolve (e.g., changing market volatility), a static AR model may be inadequate; Kalman filter‑based TV‑AR can adapt.
Multivariate non‑stationarity – individual series may be non‑stationary but a linear combination is stationary (cointegration); not covered in the outline but worth noting as a boundary case.
📍 When to Use Which
Trend/seasonal pattern present? → Difference (regular or seasonal) or use a seasonal ARIMA.
Short‑range dependence only? → Simple AR(p) or MA(q) may suffice.
Both AR and MA features visible in ACF/PACF? → ARMA(p,q).
Long memory or fractional integration suspected? → ARFIMA (fractionally integrated).
Multiple interacting series? → VAR (or VAR‑X if exogenous inputs exist).
Variance changes over time? → ARCH/GARCH family.
Non‑linear dynamics (chaos, heteroskedasticity) observed? → Non‑linear ARX, NAR, or state‑space models.
Need spectral insight (periodic components, time‑localized frequencies)? → Fourier spectral analysis or wavelet transform.
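The spectral route in the last bullet can be sketched with a plain FFT periodogram (NumPy only; the sampling rate, signal frequency, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# A noisy 5 Hz sinusoid sampled at 100 Hz for 10 seconds
fs, n = 100, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(n)

# Periodogram via the FFT: the dominant peak marks the hidden frequency
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[spec.argmax()]
print(round(peak, 1))  # the injected 5 Hz component dominates
```

A wavelet transform would add *when* that component is active, which matters once the frequency content changes over the record.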
👀 Patterns to Recognize
Slowly decaying ACF → Possible non‑stationarity (needs differencing).
Sharp cut‑off in PACF after lag p → AR(p) structure.
Sharp cut‑off in ACF after lag q → MA(q) structure.
Both ACF and PACF tail off → ARMA(p,q) candidate.
Seasonal spikes in ACF at multiples of the seasonal period → Seasonal component.
Increasing variance over time → Heteroskedasticity → consider ARCH/GARCH.
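The "sharp ACF cut‑off" pattern is easy to reproduce: simulate an MA(1) and watch the sample ACF die after lag 1 (NumPy only; `sample_acf` is an illustrative helper and $\theta = 0.8$ an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_acf(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# MA(1): X_t = eps_t + 0.8 * eps_{t-1}
eps = rng.standard_normal(5001)
x = eps[1:] + 0.8 * eps[:-1]

acf = sample_acf(x, 5)
# Theory: acf(1) = 0.8 / (1 + 0.8**2) ≈ 0.49, acf(k) ≈ 0 for k >= 2
print(round(acf[1], 2))
print(np.abs(acf[2:]).max())  # near zero: the ACF "cuts off" after lag 1
```

An AR(1) simulated the same way would instead show a geometrically *tailing‑off* ACF, which is exactly the distinction the pattern table exploits.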
🗂️ Exam Traps
“If the series looks wavy, use a Fourier transform.” – Visual waviness alone doesn’t guarantee a dominant frequency; autocorrelation analysis may be more appropriate.
Choosing ARIMA order based only on ACF – Ignoring PACF can lead to wrong AR order.
Assuming differencing always improves a model. – Over‑differencing introduces artificial negative autocorrelation (an unnecessary MA component) and inflates forecast‑error variance.
Treating a non‑stationary series as stationary because the mean looks constant. – Check variance and autocovariance; hidden trends may still violate stationarity.
Confusing “exogenous” with “endogenous”. – In AR‑X models, exogenous inputs affect the target series but are not influenced by it; VAR treats all series as mutually endogenous.
---
Study tip: Memorize the ACF/PACF pattern table (cut‑off vs. tail‑off) and the ARIMA decision flow (stationarity → differencing → identify AR/MA orders). Those two visual cues solve the majority of exam questions on model selection.