
Attention - Models and Resource Theories

Understand the key attention models, the brain networks, and the resource theories that explain how attention is allocated, limited, and affected by perceptual load.

Summary

Understanding Models and Theories of Attention

Introduction

Attention is one of the most fundamental aspects of human cognition. It allows us to selectively process important information from the overwhelming stream of sensory input we constantly receive. But how does this selection actually work? Does attention filter out irrelevant information before or after we process its meaning? Can we attend to multiple things at once, or do we have limited "resources" that must be divided? These questions have driven decades of research in cognitive psychology, producing several competing models and theories. Understanding these different perspectives, and how they've evolved, will give you insight into how attention actually functions in the brain and behavior. The models you'll learn build on each other, with later theories addressing limitations in earlier ones.

The Selection Filter Debate: Where Does Attention Block Information?

The most fundamental question in attention research is when selection occurs. Does attention filter out irrelevant information early in processing, before we even identify what it is, or does it operate late, after we've already analyzed the meaning? Three major models address this question.

The Early Filter Model

The early filter model proposes that sensory input undergoes very rapid, automatic processing in a pre-attentive stage. All sensory information first enters a sensory store (a very brief memory buffer), where a filter examines the physical characteristics of incoming stimuli, such as color, pitch, location, or loudness.

The key idea: this filter selects information based only on these simple physical features, before the brain even processes what the information means. Unattended information is completely blocked and doesn't receive any semantic (meaning-based) processing.

Think of it like a security checkpoint: the filter quickly scans each piece of incoming information and decides whether to let it through based on superficial features, without considering the deeper content. This model explains why you can follow a conversation at a crowded party by focusing on someone's voice location and tone: you filter based on those physical characteristics rather than listening to everyone's words and then choosing which to understand.

The Attenuation Model

The attenuation model emerged because early filter theory couldn't explain certain findings. In one famous experiment, researchers had participants listen to two different messages through headphones, one in each ear. Participants were instructed to "shadow" (repeat back) only one message. The early filter model would predict that they register nothing from the unattended ear, but that's not what actually happens.

The attenuation model suggests something more nuanced: unattended information isn't completely blocked. Instead, it's weakened or "attenuated", like turning down the volume on a radio rather than turning it off completely. This weakened information still receives some processing.

The crucial finding: highly salient (important or personally relevant) information from the unattended channel can still "break through" and reach awareness. The most famous example is the cocktail party effect: at a noisy party, you're focusing on one conversation, but you immediately notice if someone says your name from across the room. Your unattended name manages to reach awareness despite the competing information. The attenuation model explains this by proposing that while unattended information gets reduced processing, words with high personal significance (like your name) have a lower threshold for reaching awareness and can "punch through" the attenuation.
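To make the contrast between the filter and attenuation accounts concrete, here is a minimal Python sketch. Both models are verbal theories, so every number, word, and threshold below is an invented assumption for illustration only; note that setting the attenuation factor to zero would recover the early filter model's complete blocking.

```python
# Toy sketch of the attenuation model (illustrative assumptions only).

ATTENUATION = 0.3  # unattended input is weakened, not zeroed out
                   # (set to 0.0 to mimic the early filter model)

# Hypothetical awareness thresholds: personally significant words
# (like your own name) have a lower threshold for reaching awareness.
thresholds = {"your_name": 0.2, "random_word": 0.6}

def reaches_awareness(word: str, signal: float, attended: bool) -> bool:
    """An input reaches awareness if its (possibly attenuated) signal
    strength exceeds that word's threshold."""
    effective = signal if attended else signal * ATTENUATION
    return effective >= thresholds[word]

# Both words arrive on the unattended channel with the same raw strength:
for word in ("your_name", "random_word"):
    print(word, reaches_awareness(word, signal=1.0, attended=False))
# your_name True   -> "punches through" the attenuation
# random_word False -> attenuated below its threshold
```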
The Late Selection Model

The late selection model argues for the opposite timing: all inputs undergo full semantic analysis, and selection happens much later, at the point of response or conscious awareness. According to this model, when you're at a party, you actually process the meaning of many conversations around you, but you only become consciously aware of the one you're attending to. Selection isn't about filtering before analysis; it's about filtering before consciousness.

This model explains the cocktail party effect differently from attenuation theory: your name is processed semantically like everything else, but it's important enough to reach conscious awareness and influence your response.

Why this matters: the question of when selection occurs isn't just theoretical; it reveals fundamental truths about whether our brains process information we're not consciously aware of. Late selection suggests the brain continuously analyzes far more than we realize; early filter theories suggest we're more efficient, blocking unnecessary processing earlier. The evidence suggests both mechanisms operate in different contexts, and the answer may depend on factors like perceptual load, which we'll explore more fully later.

Spatial Models: How Attention Moves Through Space

While the filter debate focuses on when selection occurs, spatial models focus on where attention goes and how it behaves in physical space.

The Spotlight Model

The spotlight model likens attention to a movable beam of light. Just as a spotlight can be directed to illuminate a particular region of a stage, attention can be directed to a region of the visual field. Within the spotlight, the attended region, processing is enhanced and detailed. Outside the spotlight, in peripheral areas, processing is coarser and less detailed.

The spotlight is:
- Movable: you can voluntarily direct it (like looking at a specific point), or it can be pulled by salient stimuli (like a sudden movement)
- Limited in size: the attended region enhances processing, but this enhancement is spatially limited
- Flexible in speed: the spotlight can move from one location to another, though this takes time

This model captures something intuitive: when you focus on one area, you process it better than peripheral areas. However, it has a limitation: it assumes attention is essentially an all-or-nothing phenomenon within a fixed-size region, which isn't always accurate.

The Zoom-Lens Model

The zoom-lens model extends the spotlight metaphor by adding an important feature: the ability to adjust the size of the attended region. Attention can zoom in (narrow focus) or zoom out (broad focus), just like a camera lens or telescope.

How the zoom varies:
- Narrow focus (zoomed in): attention concentrates on a small area. Processing is very detailed and efficient for that region, but you're effectively blind to events elsewhere
- Broad focus (zoomed out): attention spreads over a larger region. You can process more areas, but because resources are distributed across a wider region, processing efficiency decreases for any given location

The crucial trade-off: there's a cost to broad attention. When you spread attention widely, you sacrifice processing depth and efficiency. This is why it's harder to notice subtle details when you're trying to monitor a large area; the trade-off is sketched numerically below. This model explains real-world phenomena: a defensive player in football might keep broad attention to monitor a large area of the field, while a surgeon might zoom in to focus intensely on a small area of tissue. Different tasks create different attentional requirements.
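The zoom-lens trade-off can be expressed as simple arithmetic: if a fixed pool of resources spreads evenly over the attended region, per-location efficiency falls as the region grows. A minimal sketch, assuming an arbitrary capacity value and even division purely for illustration:

```python
# Toy illustration of the zoom-lens trade-off: a fixed resource pool
# spread over the attended region, so per-location efficiency falls
# as the region grows. The numbers are arbitrary assumptions.

TOTAL_RESOURCES = 100.0  # hypothetical fixed capacity

def efficiency_per_location(region_size: int) -> float:
    """Processing efficiency at any one location inside the attended
    region, assuming resources divide evenly across it."""
    return TOTAL_RESOURCES / region_size

print(efficiency_per_location(1))   # 100.0 -> zoomed in: detailed processing
print(efficiency_per_location(10))  # 10.0  -> zoomed out: broad but shallow
```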
Feature Integration Theory: How Do We Bind Features Together?

So far, we've discussed where and when attention operates. But Feature Integration Theory (FIT) addresses a different question: how does the visual system combine individual features into complete objects?

The Basic Principle

Feature Integration Theory proposes that visual perception operates in two stages:
- Stage 1 (automatic, parallel processing): the brain automatically registers basic visual features across the entire visual field, simultaneously and without attention. These features include color, orientation, motion, size, and other simple visual properties. Crucially, this stage doesn't require attention; it happens in parallel for many features at once.
- Stage 2 (serial, attention-dependent processing): to identify complete objects and bind features together, the visual system requires focused attention. Attention acts like a "glue" that binds features together into coherent objects. This stage is serial (one object or location at a time) and requires attentional resources.

Why This Matters: Visual Search and Binding

Feature Integration Theory elegantly explains visual search phenomena. When you search for a red circle among blue circles, you can find it instantly; this is a "pop-out" effect, where the target's feature (red) is detected automatically in parallel. But when you search for a red circle among red triangles and blue circles, you must check each object individually, because you need to bind color AND shape to identify the target.

This theory also explains illusory conjunctions: when attention is overloaded, people sometimes report seeing features in the wrong combinations (like seeing a red triangle when actually shown a red square and a blue triangle). This happens because the parallel feature stage detected the features, but the binding stage (which requires attention) failed to combine them correctly.

Note: Feature Integration Theory remains influential, though modern research shows some modifications are needed. For instance, some features may require attention even for simple detection, and the boundary between "automatic" and "attention-dependent" features isn't as clean as originally proposed. However, the core insight, that basic features are processed in parallel and attention binds them into objects, remains a cornerstone of visual attention research.
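FIT's two stages yield a classic quantitative prediction for visual search: feature (pop-out) search time stays roughly flat as displays grow, while conjunction search time grows with the number of items. The sketch below illustrates that prediction; the timing constants are invented assumptions, not measured values.

```python
# Minimal simulation of FIT's visual-search prediction: feature search
# is one parallel pass (flat cost), conjunction search visits items
# serially (cost grows with set size). Constants are illustrative.

PARALLEL_COST = 50          # ms: one parallel pass over the whole display
SERIAL_COST_PER_ITEM = 40   # ms: attention binds features one item at a time

def search_time(set_size: int, conjunction: bool) -> float:
    if not conjunction:
        # Target differs by a single feature (e.g. red among blue):
        # detected in one parallel, pre-attentive pass.
        return PARALLEL_COST
    # Target defined by a conjunction (e.g. red AND circle): on average,
    # attention must visit half the items before finding it.
    return PARALLEL_COST + SERIAL_COST_PER_ITEM * set_size / 2

for n in (4, 8, 16):
    print(n, search_time(n, conjunction=False), search_time(n, conjunction=True))
# Feature search stays at 50 ms; conjunction search grows linearly with n.
```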
Brain Networks: The Three-Network Model of Attention

While earlier models focused on behavior and information processing, neuroscience has revealed that attention involves distinct brain systems, each serving different functions. The Three-Network Model describes how the brain implements attention through three separable networks.

The Alerting Network

The alerting network achieves and maintains a state of readiness or vigilance: the general state of preparedness to respond to events.

Key features:
- Function: gets you "ready" to detect and respond to stimuli
- Brain regions: right frontal and parietal cortex
- Neurotransmitter: modulated by norepinephrine, a neurotransmitter associated with arousal and attention
- Behavioral signature: when this network is active, you're primed to respond; when it's inactive, reaction times slow and you miss more stimuli

Think of the alerting network like the general state of attentiveness you maintain before class begins: you're ready to pay attention, even though nothing specific has drawn your attention yet.

The Orienting Network

The orienting network directs attention toward specific stimuli; it's what makes attention move and focus.

Key features:
- Function: shifts attention to behaviorally relevant locations or stimuli
- Brain regions: frontal eye fields (involved in directing eye movements) and parietal cortex (which creates saliency maps indicating which locations are important)
- Can be voluntary or involuntary: you can deliberately orient toward something you're looking for, or you can be automatically pulled toward a sudden movement or salient stimulus

The orienting network is like pointing your spotlight (from the spotlight model) in a particular direction based on what you want to find or what catches your interest.

The Executive Control Network

The executive control network (also called the executive attention network) resolves conflicts between competing responses and maintains focus despite distractions.

Key features:
- Function: when multiple possible responses compete (like when you see a distractor or have conflicting task demands), this network selects the appropriate response
- Brain regions: the anterior cingulate cortex (ACC) detects conflict; the dorsolateral prefrontal cortex (DLPFC) implements control and maintains task goals
- Neurotransmitter: involves dopamine and other neurotransmitters related to goal-directed behavior

A concrete example: you're driving and see a red traffic light (automatic response: brake), but you're also trying to remember directions (goal: continue forward to find your street). The executive network resolves this conflict and ensures you brake, maintaining your safety goal despite the competing attention demands.

Resource Theories: The Limited Capacity Problem

All the models so far describe how attention works, but they don't fully address a fundamental question: does attention have limits, and if so, what kind of limits?

Kahneman's Single-Pool Model

Daniel Kahneman proposed an influential answer: attention operates like a central pool of limited resources that can be flexibly allocated across different tasks and demands.

The key ideas:
- Limited capacity: there's only so much "attentional stuff" available at any moment
- Flexible allocation: you can distribute these resources however you want, concentrating them all on one task or dividing them among multiple tasks
- Effort: deploying and allocating resources requires effort, and more demanding tasks require more resources
- Performance trade-offs: if you divide resources, each task gets fewer resources and performance on each suffers (this is why multitasking is difficult)

This model explains why it's hard to have two intense conversations at once: you're trying to divide a limited resource pool between two demanding tasks, so each gets incomplete attention.
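A minimal sketch of the single-pool idea, assuming a simple performance rule (invented for illustration): performance is the fraction of a task's demand that its allocated resources can cover.

```python
# Hedged toy model of Kahneman's single resource pool: one limited pool,
# flexibly divided across tasks. Demand values and the performance rule
# are assumptions for illustration only.

POOL = 1.0  # total attentional capacity

def performance(demand: float, allocated: float) -> float:
    """Fraction of a task's demand covered by its allocated resources,
    capped at 1.0 (= full performance)."""
    return min(1.0, allocated / demand)

# One demanding task gets the whole pool: full performance.
print(performance(demand=0.8, allocated=POOL))   # 1.0

# Two equally demanding tasks split the pool: both suffer.
share = POOL / 2
print(performance(demand=0.8, allocated=share))  # 0.625
print(performance(demand=0.8, allocated=share))  # 0.625
```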
Modality-Specific Resource Models

However, not all interference in dual-task situations comes from a general resource limit. Modality-specific resource models propose that some resources are specific to particular sensory modalities or processing types.

The key insight: interference is typically stronger when tasks share the same sensory modality than when they use different modalities. For example:
- Reading while listening to an audiobook is harder than reading while listening to music (both reading and the audiobook are language-processing tasks)
- Listening to music while on a phone call is harder than listening to music while doing math (the phone call is auditory-language; the math is visual-spatial)

This suggests the brain has separate resources for different types of processing, not just one central pool. You might do better juggling multiple tasks if they use different processing pathways.

Note: the truth is probably that both mechanisms operate. There's some evidence for a general attentional resource pool (as Kahneman proposed), but also clear evidence for modality-specific limitations. When tasks compete for the same sensory modality or processing mechanism, interference is particularly strong.

Perceptual Load Theory: Integrating Selection and Resources

So far, we've discussed early vs. late selection as competing theories, and resource limitations as separate from selection questions. Perceptual Load Theory brilliantly integrates these ideas by proposing that the amount of perceptual load determines whether selection occurs early or late.

The Core Principle

Lavie (1995) proposed that the visual system has a fixed amount of processing capacity. How attention allocates this capacity depends on the current perceptual load of the task.

When perceptual load is HIGH (the task is complex and demands much processing):
- The attended task consumes most available capacity
- Early selection occurs: resources are exhausted on the task, leaving little to process irrelevant stimuli
- Distractors are filtered out before they're fully processed
- Inattentional blindness (failing to notice unexpected objects) increases

When perceptual load is LOW (the task is simple and demands little processing):
- The attended task leaves spare capacity available
- Late selection occurs: the remaining capacity is used to process irrelevant stimuli
- Distractors are analyzed more fully and can interfere with task performance
- You're more likely to notice unexpected objects

Supporting Evidence

Lavie and Tsal (1994) demonstrated this directly. Participants performed a visual search task (finding a target letter among distractors). In the high-load condition, finding the target was difficult because many similar letters were present. In the low-load condition, finding the target was easy because the background was homogeneous.

The key finding: in high-load conditions, participants showed reduced processing of irrelevant stimuli. In low-load conditions, irrelevant stimuli were processed more fully and caused interference. This elegantly explains why you might not notice someone waving at you across a crowded, busy street (high load: many visual elements competing for processing) but would notice the same person in a quiet, empty parking lot (low load: little else demands processing capacity).
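The core principle reduces to a simple capacity equation: distractor processing equals whatever capacity the attended task leaves unused. A toy sketch, assuming (purely for illustration) that spare capacity spills automatically onto irrelevant stimuli:

```python
# Sketch of Lavie's perceptual load principle under one simplifying
# assumption: capacity left over by the task "spills" onto irrelevant
# stimuli. Capacity and load values are illustrative, not empirical.

CAPACITY = 1.0  # fixed perceptual capacity

def distractor_processing(task_load: float) -> float:
    """Capacity left over after the task, which goes to distractors."""
    spare = max(0.0, CAPACITY - task_load)
    return round(spare, 2)

print(distractor_processing(task_load=0.9))  # high load -> 0.1 spare:
                                             # early selection, distractors filtered
print(distractor_processing(task_load=0.3))  # low load -> 0.7 spare:
                                             # late selection, distractors interfere
```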
Attentional Set and Visual Search

The interaction between perceptual load and attention isn't simple, however. Theeuwes, Kramer, and Belopolsky (2004) showed that an attentional set, meaning your current goals and what you're searching for, interacts with perceptual load. If you're specifically searching for a particular color or feature, you may automatically orient toward that feature even when the current perceptual load is low, because your goal activates the relevant attentional systems.

Perceptual Load and Consciousness

Cartwright-Finch and Lavie (2007) extended perceptual load theory to explain inattentional blindness: the failure to notice unexpected objects when attention is directed elsewhere. They found that higher perceptual load increases inattentional blindness. When people are engaged in a high-load task, they're more likely to completely miss an unexpected object that appears (like a woman in a gorilla suit walking through a basketball game, a famous demonstration of inattentional blindness).

This makes intuitive sense from the perceptual load perspective: high load exhausts available capacity, leaving nothing for unexpected stimuli. With low load, spare capacity can detect the unexpected object.

Note: Folk (2010) described divided attention as a dynamic function of perceptual load and task demands. This perspective emphasizes that whether you notice something unexpected depends on the complex interaction between how much your current task demands and how salient the unexpected stimulus is.

Multitasking, Bottlenecks, and Cognitive Limits

Up to this point, we've mostly discussed attention to a single focus of interest. But real life often demands managing multiple tasks simultaneously. Understanding how, and why, multitasking fails is crucial.

Dual-Task Interference

When people try to perform two tasks at once, their performance on both typically degrades. This dual-task interference reveals fundamental limits in how we manage competing attentional demands.

Salvucci and Taatgen (2008) proposed threaded cognition as an integrated theory of concurrent multitasking. Their model describes how multiple "task threads" can run in parallel while sharing limited cognitive resources. Tasks with high attentional demands compete heavily; tasks with lower demands (that are more automatic) can coexist more easily. This explains why you might successfully drive while having a casual conversation, but immediately stop talking when traffic gets heavy: the driving task suddenly increases in load and demands more resources from the shared pool.
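The sketch below is loosely inspired by the threaded-cognition idea: task threads run concurrently but take turns on a shared serial resource, so a demanding thread leaves fewer cycles for the others. The round-robin scheduler and step counts are simplifying assumptions, not the authors' actual model.

```python
# Loose sketch inspired by threaded cognition: task "threads" interleave
# on one shared cognitive resource. Scheduling rule and step counts are
# invented for illustration.

from collections import deque

def run_threads(tasks: dict[str, int]) -> list[str]:
    """Interleave tasks round-robin on one shared resource; each entry
    in the returned trace is one step of whichever thread held it."""
    queue = deque(tasks.items())
    trace = []
    while queue:
        name, steps_left = queue.popleft()
        trace.append(name)                 # this thread uses the resource
        if steps_left > 1:
            queue.append((name, steps_left - 1))
    return trace

# A demanding task (driving in traffic) and a light one (casual chat):
print(run_threads({"driving": 6, "conversation": 2}))
# Once "conversation" finishes, "driving" gets every cycle; with two
# demanding threads, each would get only half the cycles.
```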
Bottlenecks and Resource Allocation

The question of where exactly limitations emerge has refined our understanding. Trautwein, Singer, and Kanske (2016) found that stimulus-driven reorienting creates a bottleneck in the anterior insula. When unexpected stimuli suddenly grab attention, processing passes through a neural bottleneck that impairs your ability to maintain executive control. This explains why you lose your place in a conversation if something unexpected suddenly happens: not because you lack general resources, but because attention is involuntarily pulled and must pass through a bottleneck to reorient.

Sensory modality also matters: Wahn and König (2017) demonstrated that attentional resource allocation across sensory modalities depends on task-specific demands. If you need to integrate information from multiple senses (like watching someone speak while listening), you must allocate resources differently than if the tasks are unrelated.

Note: the research on bottlenecks suggests that attention limitations aren't just about a limited resource pool; they're also about specific neural systems and structural limits on how information flows through the brain. Understanding both the resource and bottleneck perspectives gives a more complete picture.

Summary: A Unified Understanding

The models and theories you've learned work together to create a comprehensive understanding of attention:
- Selection models (early filter, attenuation, late selection) explain when filtering occurs
- Spatial models (spotlight, zoom-lens) explain where attention goes and how it behaves
- Feature Integration Theory explains how attention binds features into objects
- The Three-Network Model reveals the brain systems that implement attention
- Resource and load theories explain why attention has limits and how those limits depend on task demands
- Multitasking and bottleneck research shows how limitations emerge when managing competing demands

Rather than contradicting each other, these models address different aspects of the same phenomenon. Attention is simultaneously a filter, a spotlight, a binding mechanism, a neural system, and a limited resource, depending on which aspect you're examining.
Flashcards
What does the early filter model propose happens to sensory input before semantic processing?
It is held in a pre-attentive store and filtered based on physical characteristics.
How does the attenuation model explain the processing of unattended inputs?
Inputs are weakened rather than blocked, allowing salient information to be processed.
At what point does selection occur according to the late selection model?
At the point of response or conscious awareness.
How does the spotlight model describe the processing of peripheral information?
Peripheral information receives coarser processing than information within the movable beam.
What is the trade-off when the attended region expands in the zoom-lens model?
Resources are distributed wider, which reduces processing efficiency.
According to feature integration theory, what role does focused attention play in object identification?
It binds basic visual features together.
What are the two stages of processing proposed by attentional engagement theory?
An initial parallel stage that creates structural descriptions of objects, and a selective stage that engages specific representations.
Which neurotransmitter modulates the alerting network in the right frontal and parietal regions?
Norepinephrine
Which brain structures are used by the orienting network to direct attention toward stimuli?
The frontal eye fields and parietal saliency maps.
Which brain regions are involved in the executive attention network's role of resolving response conflict?
The anterior cingulate cortex and the dorsolateral prefrontal cortex.
How did Daniel Kahneman describe the nature of attentional resources in his single-pool model?
As a limited central pool that can be flexibly allocated across tasks.
When is interference between tasks strongest according to modality-specific resource models?
When the tasks share the same sensory modality.
According to Lavie (1995), what factor determines whether selection occurs early or late?
The amount of perceptual load.
How does high perceptual load affect the processing of irrelevant stimuli?
It reduces the processing of irrelevant stimuli.
What is the relationship between perceptual load and inattentional blindness according to Cartwright-Finch and Lavie (2007)?
Higher perceptual load increases the likelihood of inattentional blindness.

Key Concepts
Attention Models
Early filter model
Attenuation model
Late selection model
Perceptual load theory
Resource theories of attention
Attention Mechanisms
Spotlight model
Zoom‑lens model
Feature integration theory
Three‑network model of attention
Threaded cognition