Numerical weather prediction - Introduction and Historical Development
Understand the fundamentals of numerical weather prediction, its forecast skill and limitations, and its historical development including ensemble methods and data assimilation.
Summary
Numerical Weather Prediction: Definition, Purpose, and Practice
What is Numerical Weather Prediction?
Numerical weather prediction (NWP) is the process of using mathematical models of the atmosphere and oceans—combined with current observational data—to forecast future weather. Unlike traditional forecasting methods that rely on pattern recognition or simple rules of thumb, NWP employs sophisticated computer simulations to solve the physical equations governing atmospheric motion.
The basic idea is straightforward: if we know the current state of the atmosphere (temperature, pressure, wind, humidity, etc.) and understand the physics that drives atmospheric change, we can mathematically simulate how the atmosphere will evolve over time. This process requires two essential ingredients: accurate observations of the current atmospheric state and computational power to solve complex mathematical equations.
NWP serves two primary purposes. Short-term forecasts (typically 1-14 days ahead) help predict day-to-day weather changes that affect daily life, agriculture, transportation, and emergency management. Long-term climate predictions (weeks to seasons ahead) help identify broader climate patterns and anomalies, though with lower spatial detail and reliability.
Modern weather models are structured so that observations are fed into a model that includes both horizontal (latitude-longitude) and vertical (height/pressure) grids. The model then simulates physical processes occurring in both the atmosphere and ocean to generate predictions.
Why Powerful Computers Are Essential
The computational demands of NWP are enormous. A global weather model must:
Discretize the entire Earth into a three-dimensional grid (often with millions of grid points)
Calculate physical processes at each grid point
Perform these calculations repeatedly as the forecast advances in time
Process vast amounts of observational data to establish initial conditions
This requires supercomputers capable of performing trillions of calculations per second. Without such computational resources, weather prediction at useful spatial scales (such as predicting local thunderstorms) would be impossible.
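To make the discretize-and-step pattern concrete, here is a deliberately tiny sketch (illustrative numbers, not an operational scheme): a one-dimensional temperature anomaly advected around a periodic ring of grid points, stepped forward in time the same way a global model steps millions of three-dimensional points.

```python
import numpy as np

# Toy illustration (not an operational model): advect a temperature
# anomaly along a 1-D periodic "latitude circle" with upwind finite
# differences -- the same discretize-then-step pattern a global model
# applies at millions of 3-D grid points.
nx, dx, dt, u = 100, 100e3, 600.0, 10.0  # 100 points, 100 km spacing, 10 m/s wind
temp = np.exp(-((np.arange(nx) - 50) ** 2) / 50.0)  # Gaussian warm anomaly

for _ in range(720):  # 720 steps * 600 s = 5 simulated days
    # upwind scheme for dT/dt = -u * dT/dx, stable because u*dt/dx = 0.06 <= 1
    temp = temp - u * dt / dx * (temp - np.roll(temp, 1))

print(f"anomaly peak now at grid point {int(np.argmax(temp))}")
```

Even this one-equation, one-dimensional toy needs 72,000 arithmetic updates for a 5-day run; a real model repeats comparable work for winds, moisture, radiation, and more at every one of its millions of grid points.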
<extrainfo>
Historically, early numerical weather prediction experiments in the 1950s used the ENIAC computer, which was room-sized but had computational power far less than a modern smartphone. The development of weather forecasting essentially paralleled advances in computing technology.
</extrainfo>
Understanding Forecast Skill and Limitations
The Practical Limit: Six Days for Deterministic Forecasts
Modern numerical weather prediction models can generate skillful deterministic forecasts (single, definite predictions) for approximately six days into the future. This means that after about 144 hours, the model's predictions typically become less reliable than simpler statistical methods such as climatology.
Beyond six days, forecast skill degrades rapidly. This limitation exists not because our computers aren't powerful enough or our models are fundamentally flawed, but rather because of a fundamental property of atmospheric physics.
The Chaos Problem: Why Errors Grow Exponentially
The equations governing atmospheric motion are chaotic. Chaos, in the mathematical sense, means that tiny differences in initial conditions can lead to dramatically different outcomes—a concept famously illustrated by the "butterfly effect." In the context of weather, this means that even microscopic errors in our initial observations will grow exponentially as the forecast progresses.
Research has shown that forecast errors roughly double approximately every five days. This exponential growth means:
Day 1-2: Very small errors; forecast is quite accurate
Day 3-5: Errors become noticeable; forecast remains useful
Day 6-10: Errors grow substantially; forecast skill diminishes
Day 14+: Errors are so large that deterministic forecasts lose most of their value
This five-day doubling time creates a natural limit of about 14 days for any theoretically useful deterministic weather forecast, regardless of model perfection.
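The arithmetic behind this error-growth argument is simple enough to check directly; the numbers below are purely illustrative.

```python
# Back-of-the-envelope version of the error-growth argument above:
# an error that doubles every 5 days grows as eps * 2**(t / 5).
doubling_days = 5.0
initial_error = 0.1  # illustrative initial-condition error (e.g. 0.1 degC equivalent)

for day in (2, 6, 10, 14):
    error = initial_error * 2 ** (day / doubling_days)
    print(f"day {day:2d}: error has grown by a factor of {error / initial_error:.1f}")
```

By day 10 the error has quadrupled and by day 14 it is roughly seven times its starting size, at which point it is comparable to the natural variability the forecast is trying to predict.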
Factors Affecting Forecast Accuracy
Even within the six-day skillful range, forecast accuracy varies based on:
Observation density: Regions with more observations (like heavily populated areas with dense station networks) produce more accurate forecasts than data-sparse regions (like oceans or remote areas)
Observation quality: Accurate, well-calibrated instruments yield better initial conditions
Model deficiencies: No model perfectly represents all atmospheric processes; systematic biases in the model reduce forecast skill
Post-Processing and Uncertainty: Going Beyond Single Forecasts
Because deterministic forecasts have inherent limitations, meteorologists have developed two powerful techniques to improve forecast utility and communicate uncertainty.
Model Output Statistics (MOS)
Model output statistics (MOS) is a statistical post-processing technique applied after the model has run. The basic idea is that atmospheric models have predictable systematic errors—they might consistently overestimate temperature in certain locations, for example, or underestimate precipitation.
MOS techniques work by:
Comparing historical model forecasts with what actually occurred
Identifying systematic biases and local effects
Developing regression equations that correct the model output
Applying these corrections to new forecasts
For example, a MOS system might learn that "when the model predicts 40°F at location X, the actual temperature is typically 43°F" and therefore automatically adjust future predictions. MOS can also account for local effects that models might miss, such as how a specific valley modifies wind patterns.
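A minimal sketch of that idea follows, using made-up station data; real MOS equations use many predictors (humidity, wind, cloud cover, and so on), not just the raw temperature forecast.

```python
import numpy as np

# Minimal MOS-style bias correction (illustrative data, not real records):
# fit a regression mapping past model forecasts to observed temperatures,
# then apply it to a new raw forecast.
model_fcst = np.array([35.0, 40.0, 45.0, 50.0, 55.0])  # past model output (degF)
observed   = np.array([38.1, 42.9, 48.2, 52.8, 58.0])  # what actually occurred

slope, intercept = np.polyfit(model_fcst, observed, 1)  # least-squares fit

raw = 40.0  # today's raw model forecast
corrected = slope * raw + intercept
print(f"raw {raw:.0f}F -> MOS-corrected {corrected:.1f}F")
```

With this toy training data the regression learns the roughly +3°F bias, so a raw 40°F forecast comes out corrected to about 43°F, exactly the kind of systematic adjustment described above.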
Ensemble Forecasting: Estimating Uncertainty
Rather than running a single forecast, ensemble forecasting runs multiple simulations—often dozens or even hundreds—each with slightly different starting conditions or slightly different model physics. This approach directly addresses the chaos problem by asking: "If I start with slightly different but equally plausible initial conditions, what range of outcomes do I get?"
The resulting ensemble generates not just a single forecast, but a distribution of possible outcomes. This allows meteorologists to:
Estimate forecast uncertainty: How confident should we be in this prediction?
Identify unlikely but possible extreme scenarios
Communicate probabilistic information to decision-makers
Extend the useful forecast period beyond the deterministic limit
For instance, an ensemble might show that there's a 70% probability of rain tomorrow, but a 15% probability of heavy snow if a particular upper-level disturbance develops. This probabilistic information is often more useful than a single deterministic forecast.
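The core mechanism can be sketched with a toy chaotic system standing in for the atmosphere; the logistic map below is an illustration, not a weather model, and the perturbation size and threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble on a chaotic map: perturb one "analysis" state slightly,
# run every member forward, and read the spread of outcomes as a probability.
n_members, n_steps = 50, 30
analysis = 0.3                                          # best-guess initial state
members = analysis + rng.normal(0.0, 1e-4, n_members)   # tiny initial-condition errors

for _ in range(n_steps):
    members = 3.9 * members * (1.0 - members)           # chaotic step: errors grow fast

prob_high = (members > 0.5).mean()                      # e.g. "probability state ends up high"
print(f"ensemble spread {members.std():.3f}, P(state > 0.5) = {prob_high:.2f}")
```

Although every member starts within 0.0001 of the analysis, chaos amplifies those differences until the ensemble fans out across a wide range of outcomes, and the fraction of members beyond a threshold becomes the forecast probability.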
Historical Development of Numerical Weather Prediction
The Beginning: Translating Physics into Computation (1950s onward)
The first successful numerical forecast integrations were performed on the ENIAC computer in 1950, and in 1956 the first general circulation model (GCM) of the entire atmosphere was run as a numerical experiment. These weren't yet operational forecast systems, but proofs-of-concept: researchers had successfully translated the fundamental equations of atmospheric physics into computer code and demonstrated that the resulting simulations reproduced realistic atmospheric behavior.
By modern standards, ENIAC and the other early machines had trivial computing power. Yet scientists demonstrated that weather could be simulated mathematically. This foundational work established that NWP was not merely a theoretical possibility but a practical reality.
<extrainfo>
The fact that these early experiments worked at all was somewhat surprising at the time. Some atmospheric scientists doubted whether the chaotic nature of weather would make numerical prediction impossible. The successful 1956 experiment proved skeptics wrong.
</extrainfo>
Expanding the Models: Oceans, Regions, and Practical Surfaces
As computing power increased through the 1960s and 1970s, weather models evolved in several important directions:
Coupled Atmosphere-Ocean Models
In the late 1960s, researchers at the NOAA Geophysical Fluid Dynamics Laboratory developed general circulation models that combined both ocean and atmosphere physics. These coupled models could simulate how ocean temperatures influence atmospheric circulation and vice versa. This advance was crucial for understanding longer-timescale phenomena like El Niño and other climate patterns that result from ocean-atmosphere interactions.
Limited-Area (Regional) Models
Throughout the 1970s, scientists developed limited-area or regional models that focused on smaller geographic domains rather than the entire globe. Regional models have several advantages:
Higher resolution: By covering a smaller area, they can use finer grid spacing and simulate smaller-scale weather features like thunderstorms and local wind patterns
Better local detail: They can represent local terrain (mountains, coastlines) more realistically
Practical applications: They enabled better forecasts for specific regions
Limited-area models dramatically improved the skill of tropical cyclone track forecasts and air-quality predictions, as they could resolve the mesoscale features that determined storm movement and pollution transport.
<extrainfo>
Model track forecasts for the same tropical cyclone can differ noticeably: different models produce different predicted tracks. Modern ensemble systems run multiple variations to capture this uncertainty.
</extrainfo>
Model Output Statistics Becomes Operational
By the 1970s-1980s, the MOS technique described earlier became operationally implemented. Forecasters realized that raw model output could be systematically improved through statistical post-processing, and they developed automated systems to apply these corrections operationally.
Modern Era: Ensemble Methods and Data Assimilation (Late 1990s onward)
Ensemble Forecasting Becomes Operational
Ensemble forecasting, which had been explored conceptually in research, became operationally implemented during the 1990s. The National Centers for Environmental Prediction (NCEP) introduced ensemble forecasting using a "breeding method"—a technique for generating multiple slightly different initial conditions that capture the main sources of uncertainty.
Around the same time, the European Centre for Medium-Range Weather Forecasts (ECMWF) created a comprehensive ensemble prediction system. These operational ensembles fundamentally changed how meteorologists think about forecasts: rather than viewing a forecast as a single answer, meteorologists now view it as a probability distribution.
<extrainfo>
The breeding method works by running both a control forecast and a slightly perturbed forecast, taking the difference between them, rescaling that difference to a size typical of analysis uncertainty, and adding the rescaled difference back to create new ensemble initial conditions. This is more computationally efficient than other methods for generating ensemble spread.
</extrainfo>
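A one-variable sketch of that breeding cycle follows; the logistic map stands in for the forecast model, and the rescaling amplitude is arbitrary. Operational breeding applies the same grow-measure-rescale loop to a full multi-dimensional model state.

```python
# Sketch of the breeding cycle on a toy chaotic system (illustrative only).
def model_step(x, steps=10):
    """Stand-in 'forecast model': a few iterations of the chaotic logistic map."""
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

amplitude = 1e-3          # target size of the bred perturbation
control = 0.3             # current analysis state
perturbation = amplitude  # initial seed perturbation

for cycle in range(20):   # repeat: grow, measure, rescale
    ctrl_fcst = model_step(control)
    pert_fcst = model_step(control + perturbation)
    diff = pert_fcst - ctrl_fcst                       # grown difference
    perturbation = amplitude if diff > 0 else -amplitude  # rescale to fixed size
    control = ctrl_fcst                                # cycle forward in time

print(f"bred perturbation: {perturbation:+.0e}")
```

In higher dimensions the rescaling divides the difference vector by its norm rather than taking a sign, but the cycle is the same: the fast-growing error directions survive the repeated rescaling, so the bred perturbations concentrate on exactly the uncertainty that matters for the forecast.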
Advanced Data Assimilation
A critical advance came with the implementation of sophisticated data assimilation techniques. Data assimilation is the process of optimally combining observations with the previous model forecast to create the best possible initial conditions for the next forecast.
The Weather Research and Forecasting (WRF) model, widely used for research and operational forecasting, incorporated variational data assimilation systems. These systems use mathematical optimization techniques to adjust the model state to fit observations while respecting the physical constraints and uncertainties in both the observations and the previous forecast.
Data assimilation is more sophisticated than simply inserting observations into the model because:
Observations have measurement errors that must be accounted for
Different observation types have different error characteristics
The model has its own biases and errors
Physical constraints must be maintained (e.g., the wind field must be dynamically consistent)
Variational methods solve an optimization problem to find the best compromise between fitting the observations and maintaining physical consistency.
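In the simplest scalar case, that optimization can be written down directly. The sketch below uses made-up numbers and is not WRF's actual assimilation code; it only shows the compromise a variational cost function encodes.

```python
# Minimal 1-D illustration of the variational idea: find the analysis x
# that balances the previous forecast (background) against a new
# observation, weighting each by its error variance.
background, sigma_b = 10.0, 2.0   # prior forecast: 10 degC, error std 2
observation, sigma_o = 13.0, 1.0  # new observation: 13 degC, error std 1

def cost(x):
    # J(x) = (x - x_b)^2 / sigma_b^2 + (x - y)^2 / sigma_o^2
    return ((x - background) / sigma_b) ** 2 + ((x - observation) / sigma_o) ** 2

# In this scalar case the minimum has a closed form: a variance-weighted blend.
weight = sigma_b ** 2 / (sigma_b ** 2 + sigma_o ** 2)
analysis = background + weight * (observation - background)
print(f"analysis = {analysis:.1f} degC, J = {cost(analysis):.2f}")
```

The analysis lands between the forecast and the observation, pulled harder toward whichever source has the smaller error variance; a full variational system computes the same kind of compromise in millions of dimensions, subject to the physical constraints listed above.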
Continuing Evolution in Observational Data
Throughout this period, radiosonde observations—weather balloons that carry instruments high into the atmosphere—have remained extensively used for assimilating surface and upper-air data. Radiosondes provide vertical profiles of temperature, humidity, and wind, which are essential for initializing weather models. They remain a gold standard for upper-air observations despite the advent of satellite data.
<extrainfo>
Aircraft observations are another important source of data assimilated into weather models. Commercial aircraft continuously measure temperature and wind as they cruise, providing valuable real-time data.
</extrainfo>
<extrainfo>
In the late 1960s, researchers began exploring stochastic dynamic prediction concepts—ways to represent model uncertainties explicitly in the forecast equations themselves, rather than just running multiple simulations. This theoretical work laid groundwork for modern ensemble and uncertainty quantification approaches.
</extrainfo>
Summary
Numerical weather prediction has evolved from a theoretical curiosity in 1956 to a sophisticated, operationally essential system that blends advanced mathematics, physics, massive observational data streams, and supercomputing power. Understanding both its capabilities (skillful forecasts to about 6 days) and its limitations (chaotic growth of errors) is essential for interpreting forecasts correctly. Modern techniques like ensemble forecasting and variational data assimilation continue to push the boundaries of what's predictable, though the fundamental limits imposed by atmospheric chaos remain.
Flashcards
What does numerical weather prediction use to forecast weather from current observations?
Mathematical models of the atmosphere and oceans.
What two types of predictions can numerical weather prediction models generate?
Short-term weather forecasts and long-term climate predictions.
What is the typical limit of forecast skill for modern deterministic forecasts?
About six days.
Why are reliable forecasts generally limited to about 14 days?
The chaotic nature of the governing partial differential equations causes errors to double roughly every five days.
What three factors determine the accuracy of a weather forecast?
Observation density
Observation quality
Model deficiencies
What is the purpose of applying Model Output Statistics (MOS) after a model run?
To correct systematic model errors and local effects.
During which period were Model Output Statistics introduced to relate model fields to surface weather?
The 1970s–1980s.
How does ensemble forecasting estimate uncertainty and extend useful forecast periods?
By creating multiple simulations with varied initial conditions or model physics.
Which organization introduced ensemble forecasting using a breeding method in the 1990s?
National Centers for Environmental Prediction (NCEP).
What type of forecasts are produced by the ensemble prediction system at the European Centre for Medium-Range Weather Forecasts (ECMWF)?
Probabilistic forecasts.
In what year was the first general circulation model of the atmosphere presented as a numerical experiment?
1956.
Which early computer was used to perform the initial integrations for numerical weather prediction?
ENIAC.
What were the primary improvements offered by the regional (limited-area) models that emerged in the 1970s?
Improved tropical cyclone track forecasts and air-quality predictions.
Which specific model incorporated a variational data assimilation system for improved initial conditions?
The Weather Research and Forecasting (WRF) model.
What observational source has been used extensively for surface and upper-air data assimilation?
Radiosonde observations.
Quiz
Numerical weather prediction - Introduction and Historical Development Quiz
Question 1: Which organization first introduced ensemble forecasting using a breeding method in the 1990s?
- National Centers for Environmental Prediction (correct)
- European Centre for Medium‑Range Weather Forecasts
- Weather Research and Forecasting (WRF) model developers
- United States Air Force
Key Concepts
Weather Prediction Techniques
Numerical weather prediction
Ensemble forecasting
Data assimilation
Model output statistics
Climate Modeling
General circulation model
Weather Research and Forecasting model
Meteorological Tools and Concepts
Supercomputing in meteorology
Predictability limit
Radiosonde
Definitions
Numerical weather prediction
The use of mathematical models and computer simulations to forecast atmospheric conditions based on current observations.
General circulation model
A comprehensive climate model that simulates the large‑scale movement of air and heat in the Earth’s atmosphere and oceans.
Ensemble forecasting
A technique that runs multiple model simulations with varied initial conditions or physics to estimate forecast uncertainty.
Data assimilation
The process of integrating observational data into a numerical model to produce an optimal estimate of the atmospheric state.
Model output statistics
Statistical methods applied to raw model output to correct systematic biases and improve local forecast accuracy.
Supercomputing in meteorology
The deployment of high‑performance computing systems to process massive datasets and perform complex weather model calculations.
Predictability limit
The theoretical horizon (about 10–14 days) beyond which small errors in initial conditions grow exponentially, limiting forecast reliability.
Weather Research and Forecasting model
A widely used, flexible atmospheric simulation system that supports research and operational forecasting.
Radiosonde
A balloon‑borne instrument package that measures temperature, humidity, pressure, and wind to provide vertical atmospheric profiles.