Bounded rationality - Psychological Foundations and Models
Learn how bounded willpower and selfishness shape decisions, how heuristics and framing influence choices, and how technology expands the limits of rationality.
Summary
Understanding Bounded Rationality in Human Behavior
Introduction
Classical economics assumes that people make perfectly rational decisions by carefully weighing all available information and making choices that maximize their personal well-being. However, real human behavior often diverges from this ideal. Bounded rationality is the concept that people's decision-making is limited by the information available to them, their cognitive abilities, and the time they have to decide. Rather than being perfectly rational calculators, humans rely on mental shortcuts, struggle with self-control, and care about fairness and others' welfare. This section explores the key ways that human behavior is bounded and what this means for how we actually make decisions.
Bounded Willpower: The Problem of Self-Control
Understanding the Challenge
One of the most fundamental ways that human rationality is bounded is through limited self-control. People consistently struggle to follow through on their long-term plans because immediate desires often override carefully considered intentions. You might plan to save money for retirement, exercise regularly, or study consistently—yet in the moment, the temptation to spend, relax, or procrastinate proves overwhelming.
This is not a simple lack of willpower in the sense of personal weakness. Rather, willpower itself is a limited resource. When you exercise self-control in one area (resisting a tempting snack), you have less capacity for self-control in another area (resisting the urge to check your phone). This is why dieting becomes harder later in the day, or why it's harder to stick to a budget when you're tired.
Hyperbolic Discounting: The Mathematics of Impatience
A key pattern in how people's self-control fails is captured by the concept of hyperbolic discounting. This describes how people value immediate rewards far more heavily than delayed rewards, but in a time-inconsistent way.
Consider a simple example: If offered $100 today or $110 in one week, most people choose $100 today. But if offered $100 in 52 weeks or $110 in 53 weeks, most people choose $110. Rationally, these should produce the same preference—in both cases, you're choosing between money now versus money plus $10 later, separated by one week.
Yet people behave inconsistently. The difference is psychological: waiting one week feels unbearable when the reward is immediate, but trivial when it is already a year away. Formally, hyperbolic discounting means that the discount factor people apply to the future follows a hyperbolic curve rather than an exponential one, so the effective rate of impatience declines as the delay grows longer.
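The $100-versus-$110 reversal can be reproduced with a short sketch. The discount parameters below (k = 0.15 per week for the hyperbolic curve, 5% per week for the exponential one) are illustrative assumptions chosen to make the reversal visible, not empirical estimates:

```python
def hyperbolic_value(amount, delay_weeks, k=0.15):
    """Hyperbolic discounting: present value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay_weeks)

def exponential_value(amount, delay_weeks, weekly_rate=0.05):
    """Exponential discounting: value falls by a constant factor per week."""
    return amount / (1 + weekly_rate) ** delay_weeks

# Near choice: $100 today vs $110 in one week
prefer_now = hyperbolic_value(100, 0) > hyperbolic_value(110, 1)      # 100 > ~95.7

# Far choice: $100 in 52 weeks vs $110 in 53 weeks
prefer_later = hyperbolic_value(100, 52) < hyperbolic_value(110, 53)  # ~11.4 < ~12.3

# An exponential discounter cannot reverse: a one-week gap always costs
# the same relative factor, no matter when it occurs.
exp_near = exponential_value(100, 0) > exponential_value(110, 1)
exp_far = exponential_value(100, 52) > exponential_value(110, 53)
# exp_near and exp_far are always the same preference
```

The hyperbolic chooser prefers $100 now in the near choice but $110 later in the far choice; the exponential chooser makes the same choice at both horizons.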
This explains many real-world behaviors:
Choosing quick gratification (fast food, entertainment) over long-term health
Procrastinating on important tasks until the deadline becomes urgent
Taking on high-interest debt for immediate consumption
The practical implication is that people's preferences are not time-consistent. They plan one way but act differently when the moment arrives.
Bounded Selfishness: Social Preferences and Fairness
Beyond Pure Self-Interest
A second major way that human rationality is bounded challenges a different assumption from classical economics: that people care only about themselves. Research consistently shows that people have social preferences—they care about fairness, reciprocity, and others' welfare, even when it costs them personally.
Consider a simple experiment: In a dictator game, one person (the dictator) is given $10 and must decide how to split it with another person. If people were purely self-interested, the dictator would keep all $10 and give nothing. Yet most dictators give a meaningful share to the other person, typically around 20-30% of the money on average, even though they receive nothing in return.
This reveals that people have built-in concerns about:
Fairness: People dislike unequal outcomes and often reject unfair divisions even when it costs them
Reciprocity: People are willing to reward those who cooperate and punish those who don't, sometimes at significant personal cost
Altruism: People sometimes act generously toward others without expectation of return
The Limits of Concern for Others
However, selfishness is not entirely absent—it is merely bounded. The scope of people's concern for others is limited and depends heavily on context. Several factors constrain social preferences:
In-group favoritism: People care more about members of groups they belong to (their nationality, religion, company, or sports team) than about outsiders
Emotional distance: People care more about those they know personally than distant strangers; helping a local charity feels more pressing than addressing distant suffering
Salience and visibility: When others' needs are visible and immediate (a homeless person asking for money), people are more generous than when needs are abstract (global poverty statistics)
This explains why you might donate to a local food bank but feel less compelled to prevent deaths from preventable diseases in distant countries—even though the latter might do more good overall.
The Foundation: Kahneman and Tversky's Breakthrough
Before we explore how people actually make decisions, it's essential to understand the researchers who mapped the psychological reality of human choice. Daniel Kahneman and Amos Tversky revolutionized the study of decision-making by demonstrating systematically that people rely on heuristics—mental shortcuts—rather than careful calculation.
Their research revealed three critical insights:
Heuristics of judgment: People estimate probabilities and frequencies using mental shortcuts that often work but can lead to predictable errors. For example, people estimate the frequency of events based on how easily examples come to mind (the "availability heuristic"), which can lead them to overestimate vivid but rare dangers.
Emotional influences: Emotions like fear, enthusiasm, and personal preference heavily shape economic decisions, rather than cold rational calculation. A frightening news story about plane crashes can make people fear flying more than driving, despite driving being statistically more dangerous.
Decision procedures matter: The way choices are presented—the "frame"—fundamentally affects decisions, not just the underlying facts.
Their work established that bounded rationality is not a minor deviation from ideal behavior; it's the norm. Understanding human decision-making requires understanding psychology, not just mathematical optimization.
Risky Choice and Framing Effects
How Presentation Changes Preferences
One of Kahneman and Tversky's most important discoveries is the framing effect: the same objective choice produces different decisions depending on how it's presented. Consider this classic example:
Imagine the government is responding to an outbreak of disease expected to kill 600 people. Two programs are proposed:
Program A (certain outcome): 200 people will be saved for sure.
Program B (risky outcome): There's a 1/3 chance 600 people will be saved, and a 2/3 chance nobody will be saved.
Most people choose Program A—they prefer the certain outcome.
Now consider the same programs framed in terms of losses:
Program A (certain outcome): 400 people will die for sure.
Program B (risky outcome): There's a 1/3 chance nobody will die, and a 2/3 chance all 600 will die.
Now most people choose Program B—they prefer to take the risk!
The objective outcomes are identical (saving 200 people = 400 dying), yet the framing as "gains" versus "losses" reverses preferences. This is not irrational in the sense of logical contradiction; rather, it reveals that people's preferences depend on the reference point (what they consider the baseline). When framed as gains from a baseline of "all die," people are risk-averse. When framed as losses from a baseline of "all survive," people become risk-seeking.
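The claimed equivalence of the two frames is plain arithmetic over the 600 people at risk, and easy to verify:

```python
# Gain frame: expected number of people saved
ev_saved_A = 200
ev_saved_B = (1/3) * 600 + (2/3) * 0      # 200.0

# Loss frame: expected number of deaths
ev_dead_A = 400
ev_dead_B = (1/3) * 0 + (2/3) * 600       # 400.0

# Both frames describe identical outcomes for the same 600 people
assert ev_saved_A == ev_saved_B == 600 - ev_dead_A == 600 - ev_dead_B
```

Nothing about the outcomes differs between frames; only the reference point implied by the wording changes.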
Loss Aversion: Why Losses Loom Larger
The reason for this reversal is loss aversion: the psychological principle that potential losses weigh more heavily on our minds than equivalent gains. Losing $100 feels worse than gaining $100 feels good. This asymmetry is not about logic; it's about how our brains are wired.
Loss aversion explains many economic behaviors:
People hold losing stocks longer than winning stocks, hoping to break even
Workers resist wage cuts more fiercely than they pursue equivalent wage increases
Consumers feel more pain from a price increase than pleasure from an equal price decrease
Understanding loss aversion is critical because it means that how a choice is presented—as a potential gain or a potential loss—can completely reverse a person's decision, even when the underlying situation is identical.
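Loss aversion is commonly modeled with an asymmetric value function over gains and losses relative to a reference point. The sketch below uses the frequently cited Tversky-Kahneman parameter estimates (loss-aversion coefficient of about 2.25, curvature of about 0.88); these numbers are illustrative averages, not universal constants:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory style value function: concave for gains,
    steeper (by the factor lam) for losses of the same size."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain_feel = prospect_value(100)    # ~57.5
loss_feel = prospect_value(-100)   # ~-129.5
# The $100 loss looms larger than the $100 gain feels good
```

With these parameters, losing $100 carries roughly 2.25 times the psychological weight of gaining $100, which is the asymmetry that drives the framing reversal above.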
Decision Procedures: How the Process Shapes the Outcome
Why Procedures Matter
A crucial insight from economist Ariel Rubinstein is that bounded rationality involves not just information limitations, but also the specific procedures and rules people use to make decisions. Here's the key idea: two decision-makers with access to identical information can reach different conclusions if they follow different decision procedures.
To illustrate, imagine two car shoppers both considering buying a particular car. They both have access to the same safety ratings, price information, and fuel economy data. Yet:
Shopper A uses a lexicographic procedure: first eliminate all cars below a safety threshold, then among remaining cars, choose the cheapest
Shopper B uses a weighted-average procedure: assign scores to safety, price, and other factors, then choose the car with the highest total score
Even with identical information, these different procedures might lead them to different cars. The decision procedure itself—not just the information—shapes the outcome.
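A minimal sketch of the two shoppers makes this concrete. The car data, safety threshold, and score weights are all invented for illustration:

```python
cars = [
    {"name": "A", "safety": 5, "price": 30000, "mpg": 30},
    {"name": "B", "safety": 4, "price": 22000, "mpg": 40},
    {"name": "C", "safety": 3, "price": 18000, "mpg": 45},
]

def lexicographic_choice(cars, safety_min=4):
    """Shopper A: screen out cars below the safety threshold,
    then take the cheapest survivor."""
    safe_enough = [c for c in cars if c["safety"] >= safety_min]
    return min(safe_enough, key=lambda c: c["price"])

def weighted_choice(cars, w_safety=0.7, w_price=0.2, w_mpg=0.1):
    """Shopper B: normalize each attribute to [0, 1] and pick
    the highest weighted total score."""
    def score(c):
        return (w_safety * c["safety"] / 5
                + w_price * (1 - c["price"] / 30000)
                + w_mpg * c["mpg"] / 45)
    return max(cars, key=score)

lexicographic_choice(cars)["name"]   # 'B': cheapest car that clears the safety bar
weighted_choice(cars)["name"]        # 'A': the safety weight dominates the total
```

Both procedures read exactly the same three rows of data, yet they select different cars; the procedure, not the information, decides the outcome.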
Rubinstein argued that consistency of decisions depends on the procedural rules, not only on the information set. This fundamentally challenges the classical economic view that rationality is merely about having the right information and doing the math correctly. In the real world, the rules you follow matter.
This is why organizations care about decision procedures: they standardize how choices are made, making outcomes more predictable and consistent.
Simple Heuristics and the Adaptive Toolbox
Heuristics as Smart, Not Foolish
A common misunderstanding about bounded rationality is that it's simply a weakness—people can't think through problems carefully, so they use shortcuts that lead them astray. Psychologist Gerd Gigerenzer challenged this view with evidence that simple heuristics can actually outperform complex optimization in many real-world environments.
Gigerenzer introduced the concept of the adaptive toolbox: the repertoire of fast-and-frugal heuristics that people develop and use. These heuristics are "fast" (quick to execute), "frugal" (requiring little information), yet effective at producing good decisions in environments where people actually live and work.
How Simple Rules Beat Complex Calculations
Here's a striking example: A simple heuristic for investing is the "1/N rule": divide your money equally among the N investment options available. This requires no complex calculation and no forecasting of market movements. Yet when tested against sophisticated portfolio optimization models, the 1/N rule performs surprisingly well, especially when markets are volatile and forecasts are uncertain.
Why? Because the simple rule exploits a regularity in the environment: diversification across many options provides protection without requiring accurate predictions about which assets will perform best. The heuristic works not because it performs complex analysis, but because it's well-matched to the actual problem structure.
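The rule itself is almost trivially short to state as code, which is exactly the point: all the work is done by the environment's regularity, not the procedure. The asset names here are placeholders:

```python
def one_over_n(wealth, options):
    """1/N heuristic: allocate wealth equally across all N options.
    No forecasts, no covariance estimates, no optimization."""
    share = wealth / len(options)
    return {name: share for name in options}

one_over_n(10_000, ["stocks", "bonds", "reits", "commodities"])
# {'stocks': 2500.0, 'bonds': 2500.0, 'reits': 2500.0, 'commodities': 2500.0}
```

Contrast this with mean-variance optimization, which needs estimated returns and a covariance matrix for every asset; when those estimates are noisy, the estimation error can cost more than the "optimality" gains.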
The key principle is that simple heuristics work by exploiting regularities in the environment, not by performing comprehensive analysis. When the environment changes—when regularities shift—the heuristic may fail. But in stable environments, the heuristic can be more robust than elaborate optimization procedures that overfit to noise in the data.
This suggests that bounded rationality is not always a limitation. Given limited time and computational resources, using the right simple heuristic can produce better real-world decisions than theoretically optimal procedures.
<extrainfo>
Technological Frontiers: Expanding the Bounds of Rationality
As technology advances, the boundaries of feasible rationality expand. Moore's law—the observation that computing power roughly doubles every two years—means that calculations that were infeasible decades ago are now routine. Artificial intelligence and big-data analytics can now process vast amounts of information and identify patterns that human minds cannot.
These advances extend what is computationally possible for decision-making. Where bounded rationality once constrained a business's ability to analyze markets, machine learning now processes terabytes of customer data. Where human judgment was once limited by memory and calculation speed, algorithms now optimize across millions of variables.
However, this doesn't eliminate bounded rationality entirely—it shifts the constraint. Even with vast computing power, fundamental tradeoffs remain between the complexity of analysis and the availability of clean, reliable data. The concept of bounded rationality remains relevant; technology simply expands the feasible rationality space.
</extrainfo>
Flashcards
What does the term hyperbolic discounting describe in human behavior?
The tendency to value immediate rewards disproportionately more than delayed rewards.
How do framing effects influence a person's choices regarding the same outcome?
Choices differ depending on whether the outcome is presented as a gain or a loss.
What is the core principle of loss aversion in decision-making?
Potential losses weigh more heavily than equivalent gains.
According to Ariel Rubinstein’s models, why can identical information yield different outcomes?
Because outcomes depend on the specific decision-making procedures agents follow.
In Rubinstein’s view, what determines the consistency of final decisions?
Procedural rules (rather than only the information set).
What did Gerd Gigerenzer define as the "adaptive toolbox"?
A repertoire of fast-and-frugal heuristics.
How can simple heuristics outperform complex optimization in certain environments?
By exploiting regularities in the environment under resource constraints.
Quiz
Question 1: Why do potential losses influence decisions more strongly than equivalent gains?
- Because of loss aversion, where losses weigh heavier than gains (correct)
- Because gains are perceived as less likely to occur
- Because individuals are generally risk‑seeking in all contexts
- Because of framing effects that only apply to gains
Key Concepts
Decision-Making Biases
Bounded rationality
Hyperbolic discounting
Loss aversion
Framing effect
Bounded willpower
Heuristics and Preferences
Adaptive toolbox
Bounded selfishness
Social preferences
Procedural rationality
Fast‑and‑frugal heuristics
Definitions
Bounded rationality
A theory that human decision‑making is limited by cognitive constraints, information, and time, leading to satisficing rather than optimal choices.
Hyperbolic discounting
The tendency to prefer smaller, immediate rewards over larger, delayed ones, with discount rates decreasing over time.
Loss aversion
The psychological bias where losses are perceived as more painful than equivalent gains are rewarding.
Framing effect
The influence of how equivalent choices are presented (as gains or losses) on people's decisions.
Adaptive toolbox
A concept describing a set of simple, fast‑and‑frugal heuristics that individuals use to make effective decisions in varied environments.
Fast‑and‑frugal heuristics
Simple decision rules that ignore information to achieve quick, resource‑efficient judgments, often performing well under uncertainty.
Bounded willpower
The limited capacity for self‑control that causes people to favor short‑term desires over long‑term plans.
Bounded selfishness
The restricted scope of concern for others, shaped by factors like in‑group bias and emotional distance.
Social preferences
Preferences that incorporate fairness, reciprocity, and concern for others, deviating from pure self‑interest.
Procedural rationality
An approach emphasizing the decision‑making process and rules used, rather than just the information, in determining outcomes.