Cognitive Science and Technology: Where Human Limits Meet System Design

Technology is designed by humans. It exploits cognitive constraints humans did not design for.

Human cognitive architecture evolved for environments that no longer exist. Modern technology exploits attention systems, memory constraints, and reward mechanisms in ways that create predictable failure modes.

Human cognitive architecture consists of attention mechanisms, memory systems, and reward circuits that evolved under constraints radically different from modern technological environments. Technology design that ignores these constraints produces interfaces that work against human cognitive function. Technology design that exploits these constraints produces interfaces that capture attention and drive engagement at the cost of user goals.

People interact with technology for hours daily. The interaction patterns reflect both conscious intentions and automatic cognitive responses to interface design. The automatic responses often dominate despite conscious attempts at control.

This is not a failure of willpower or discipline. It reflects a mismatch between cognitive systems optimized for ancestral environments and interfaces optimized for engagement metrics. Your attention system evolved to detect novel stimuli that might indicate threats or opportunities. Modern interfaces generate artificial novelty signals continuously.

Your reward system evolved to motivate exploration of uncertain reward sources in resource-scarce environments. Modern interfaces provide variable reward schedules that trigger exploration behavior without resource constraints. Your memory system evolved to prioritize emotionally salient information. Interfaces curate emotional content to maximize salience regardless of informational value.

Understanding this mismatch requires examining what cognitive systems actually do, what environmental constraints they evolved under, and how technology design exploits the gap between ancestral function and modern environment.

What Attention Systems Actually Optimize For

Attention is selective information processing. The brain receives far more sensory input than it can fully process. Attention mechanisms filter input to prioritize processing resources toward information likely to be behaviorally relevant.

Behavioral relevance in ancestral environments meant threat detection, opportunity recognition, and social monitoring. Attention systems evolved to automatically orient toward sudden movements, novel stimuli, faces, and emotionally charged events. These automatic orientation responses happen faster than conscious deliberation.

Modern interfaces exploit these automatic responses. Notification badges create artificial novelty signals. Red indicators trigger threat-detection attention. Infinite scroll generates continuous novelty through content variation. Auto-play features eliminate the attention gap that would occur between content items.

The exploitation is not accidental. Interface designers optimize for engagement. Engagement metrics improve when interfaces capture attention more frequently and hold it longer. Optimizing these metrics means triggering automatic attention responses as reliably as possible.

A notification appears. Your attention automatically orients to it before you consciously decide whether the notification matters. The orientation response evolved because ignoring novel stimuli in ancestral environments carried survival cost. In modern environments, most notifications are not survival-relevant. The automatic response persists regardless.

This creates chronic attentional fragmentation. Each notification triggers orientation. Each orientation interrupts ongoing cognitive processing. Resuming interrupted processing requires time and effort. The cumulative cost of repeated interruption exceeds the cost of processing any individual notification.

People report feeling unable to ignore notifications despite knowing they are mostly irrelevant. This is not weakness. It reflects that attention orientation operates partially outside voluntary control. Interface design that continuously triggers automatic orientation produces attentional capture regardless of user intent.

The Cognitive Load Problem

Working memory capacity is limited. Humans can actively maintain approximately four chunks of information simultaneously. Complex tasks requiring coordination of more than four elements exceed working memory capacity unless external supports are provided.

Technology interfaces can either reduce cognitive load through well-designed information architecture or increase it through poor design. Most interfaces do both, creating pockets of reduced load surrounded by areas of excessive load.

A developer uses an integrated development environment. The interface provides syntax highlighting that reduces load for code reading by using color to chunk syntactic elements. The same interface presents multiple panels, notifications, and status indicators that exceed working memory capacity when simultaneously relevant.

The developer cannot attend to code logic, error messages, file structure, and version control status simultaneously at full depth. Attention must time-share across these information sources. Each context switch incurs load as working memory contents change.

Interface design that requires frequent context switching between information types creates continuous working memory load. The load is not visible in engagement metrics. Engagement might increase as users spend more time navigating information. The increased time reflects load-induced inefficiency rather than increased value.

Cognitive load accumulates. A single high-load interface might be manageable. Multiple applications each generating moderate load combine into excessive total load. The combination is often invisible to individual application designers optimizing their specific interface.

Users experience this as cognitive exhaustion after extended computer work even when the work itself is not intellectually demanding. The exhaustion reflects sustained high working memory load from managing multiple interface demands simultaneously. The work content is easy. The interface management is taxing.

How Variable Reward Schedules Drive Engagement

Reward systems in the brain respond more strongly to uncertain rewards than to predictable rewards. An uncertain reward generates a dopamine response both in anticipation and upon receipt. A certain reward generates a reduced dopamine response as the brain learns the reward is guaranteed.

This response pattern evolved to motivate exploration. In environments with uncertain resource availability, organisms that continue exploring uncertain reward sources rather than settling for guaranteed minimal rewards have fitness advantages. The dopamine response to uncertainty drives continued exploration.

Modern interfaces exploit this by implementing variable reward schedules. Social media feeds present unpredictable content. Most posts are minimally engaging. Occasionally a highly engaging post appears. The unpredictability maintains dopamine-driven checking behavior.

The checking behavior persists even when average reward value is low. If most feed refreshes provide minimal value but occasional refreshes provide high value, the variable schedule maintains engagement better than consistent moderate value would.

This explains why people repeatedly check feeds they report as mostly unsatisfying. Conscious evaluation of average feed quality indicates low value. Automatic reward system responses to the variable schedule maintain checking behavior. The conscious and automatic systems optimize different objectives.
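The asymmetry between certain and uncertain rewards can be sketched with a simple delta-rule learner, a standard stand-in for reward prediction error. Everything here is an illustrative assumption (the learning rate, the reward values, the 10% payout probability), not a model of any real feed:

```python
import random

def mean_late_surprise(rewards, alpha=0.1, tail=500):
    """Delta-rule learner: v tracks expected reward; |r - v| is the
    prediction error ('surprise') standing in for dopamine response."""
    v, errors = 0.0, []
    for r in rewards:
        err = r - v
        v += alpha * err
        errors.append(abs(err))
    return sum(errors[-tail:]) / tail

random.seed(0)
n = 5000
fixed = [0.5] * n                                   # certain reward
variable = [5.0 if random.random() < 0.1 else 0.0   # same mean, uncertain
            for _ in range(n)]

print(mean_late_surprise(fixed))     # effectively 0: surprise extinguishes
print(mean_late_surprise(variable))  # stays large: surprise persists
```

With the same mean payoff, the certain schedule's prediction error decays to nothing while the variable schedule keeps generating it. That is the mechanism described above: uncertainty, not average value, sustains the response.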

Gaming mechanics exploit this extensively. Loot boxes, random drops, and variable progression rewards all implement uncertain reward schedules. The uncertainty maintains engagement independent of whether the content provides sustained value.

The exploitation is effective because reward system responses to uncertainty operate automatically. You cannot simply decide to stop responding to uncertain rewards. The dopamine response happens before conscious evaluation. Avoiding the response requires avoiding exposure to the variable reward schedule entirely.

The Recognition Over Recall Interface Pattern

Memory systems distinguish between recognition and recall. Recognition involves determining whether a currently present stimulus was previously encountered. Recall involves retrieving information from memory without external cues. Recognition is easier than recall.

Interface design that supports recognition over recall reduces cognitive load. Menus present available options for recognition rather than requiring recall of command names. Icons provide visual recognition cues. Persistent interface elements remain visible rather than requiring memory of their location.

This design principle works when the goal is reducing user effort. It fails when the goal is skill development or information retention. Recognition-based interfaces enable use without learning. Users operate interfaces successfully without building internal models of how they work.

A user operates photo editing software through menu exploration. They can accomplish tasks by recognizing relevant menu items. They do not learn what operations are available or how operations relate to each other. Years of use produce task completion ability without operational understanding.

The recognition-based design maximizes immediate usability. It minimizes learning and transfer. Users become dependent on interface cues. Removing the interface removes ability. The user has not internalized the operational model.

This creates vendor lock-in through cognitive dependency. Users cannot easily transfer to alternative interfaces because their skill is recognition of specific interface cues rather than understanding of operational principles. The skill does not transfer.

Command-line interfaces require recall. Users must remember command names, syntax, and option flags. This creates higher initial load but forces internal model building. The internal model transfers across contexts. The user understands operations independent of specific interface presentation.

The recognition versus recall trade-off is not inherently good or bad. It is a design choice with consequences for learning, transfer, and dependency. Interfaces optimized for immediate usability through recognition sacrifice long-term learning and independence.

Why Multitasking Fails at the Cognitive Level

Multitasking is sequential task-switching, not parallel processing. When you think you are doing two cognitive tasks simultaneously, you are rapidly switching attention between them. Each switch incurs cognitive cost.

The switch cost includes time to disengage from the current task, reorient attention to the new task, retrieve relevant context from memory, and resume processing. For simple well-practiced tasks, switch costs are small. For complex tasks requiring working memory, switch costs are substantial.

Technology interfaces encourage multitasking by making task-switching trivial. Switching between applications requires minimal physical effort. The low switching cost encourages frequent switching. Frequent switching accumulates cognitive costs far out of proportion to the physical ease of each switch.

A person writes documentation while monitoring chat and email. Each incoming message triggers attention orientation. Orienting to the message interrupts documentation writing. Resuming writing requires retrieving writing context from memory. If messages arrive frequently, writing context never stabilizes in working memory.

The person experiences inability to make progress on documentation despite spending hours at the computer. The time is real. The progress is minimal. The gap reflects cumulative task-switching costs consuming available cognitive capacity.

Studies measuring multitasking efficiency consistently find that serial task completion outperforms interleaved multitasking for cognitive work. People believe they are more efficient when multitasking. Objective performance measures show they are less efficient. The subjective experience of busyness is mistaken for productivity.
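A back-of-the-envelope model makes the switch-cost arithmetic concrete. The 2-minute context-rebuild cost and the 5-minute chunk size are illustrative assumptions, not measured values:

```python
def total_time(task_minutes, n_tasks, chunk, switch_cost):
    """Wall-clock time to finish n_tasks, working in chunks of `chunk`
    minutes and paying a fixed context-rebuild cost at every switch."""
    work = task_minutes * n_tasks
    switches = (task_minutes // chunk) * n_tasks - 1  # alternating chunks
    return work + switches * switch_cost

# Two 60-minute tasks, assumed 2-minute cost to rebuild context per switch.
serial      = total_time(60, 2, chunk=60, switch_cost=2)  # finish A, then B
interleaved = total_time(60, 2, chunk=5,  switch_cost=2)  # alternate 5-min slices

print(serial)       # 122 minutes
print(interleaved)  # 166 minutes for the same 120 minutes of work
```

The work content is identical in both cases; the interleaved schedule pays for twenty-three context rebuilds instead of one, which is where the subjective busyness without progress comes from.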

Interfaces that facilitate easy task-switching enable behaviorally easy patterns that are cognitively expensive. The behavioral ease obscures the cognitive cost until performance degradation becomes severe enough to notice.

The Infinite Scroll Attention Trap

Interfaces with defined endpoints provide natural stopping points. A finite list ends. A paginated interface requires action to continue. These endpoints create opportunities for conscious evaluation of whether to continue.

Infinite scroll eliminates endpoints. Content loads continuously as the user scrolls. No action is required to continue. No natural stopping point occurs. Conscious evaluation must interrupt an ongoing behavior rather than occurring at a natural pause.

Interrupting ongoing behavior requires cognitive effort. The default is continuation. The interface design makes continuation behaviorally easier than stopping. Over time, this asymmetry produces extended engagement beyond the point where content provides value.

Users report scrolling feeds long after content becomes repetitive or uninteresting. The behavior continues because stopping requires effortful interruption while continuing requires passive scrolling. The interface exploits the effort asymmetry.

This pattern appears across contexts: video auto-play, recommended content feeds, endless pagination. Each eliminates natural endpoints and makes continuation the default path. The elimination is deliberate. Engagement metrics improve when users continue longer.

The cognitive impact is sustained attention allocation to content of decreasing marginal value. Early content in a session is often genuinely valuable or interesting. Later content is filler. The lack of endpoints prevents natural session termination at the point where marginal value drops below the opportunity cost of attention.

How Push Notifications Hijack Attention Control

Human attention control involves both voluntary direction and involuntary capture. Voluntary direction allocates attention to chosen tasks. Involuntary capture responds to salient environmental stimuli. The two systems operate in parallel.

Involuntary capture evolved as an interrupt mechanism for potentially important events. A sudden sound or movement captures attention automatically to enable rapid threat assessment. This interrupt capacity is adaptive when interrupts are rare and potentially important.

Push notifications exploit the interrupt mechanism. Each notification creates a salient stimulus designed to capture attention. The capture is automatic. By the time conscious evaluation determines the notification is unimportant, attention has already been interrupted.

Modern notification frequencies far exceed the rate of genuine important events. Dozens or hundreds of notifications per day cannot all be important. The interrupt mechanism treats them as potentially important regardless. Each triggers automatic capture and orientation.

This creates chronic attention fragmentation where sustained focus becomes nearly impossible. Any task requiring more than a few minutes of continuous attention gets interrupted multiple times by notifications. Each interruption breaks focus and requires attention re-engagement with the primary task.

Users attempting to regain control disable notifications. Applications respond by making notification disabling difficult, implementing multiple notification types requiring separate disabling, and using in-application notification badges when push notifications are disabled.

The design pattern reveals intent. If notifications served user goals, disabling them would be simple and respected. The difficulty of disabling and the multiple notification channels reveal that notifications serve application engagement goals over user attention management goals.

The Mismatch Between Reading Interfaces and Reading Cognition

Reading comprehension involves building coherent mental models from text. The process requires sustained attention, working memory for tracking referents and argument structure, and time for integration of new information with prior knowledge.

Screen-based reading interfaces often impair this process through design choices that conflict with comprehension requirements. Fixed line lengths optimized for screen width rather than reading speed, poor typography, distracting adjacent content, and hyperlink-heavy text all create comprehension obstacles.

Hyperlinks within text create continuous interruption points. Each link presents a decision: continue reading or follow the link. Making this decision repeatedly consumes cognitive resources that would otherwise support comprehension. The result is reduced comprehension of the primary text.

Variable typography and formatting on web pages create visual processing load that does not exist in well-designed print. The brain must continuously adjust reading strategies as text presentation varies. The adjustment consumes resources unavailable for comprehension.

Adjacent dynamic content creates competing attention demands. Advertisements, suggested articles, and auto-playing videos all generate salient stimuli while you attempt to read. The stimuli trigger automatic attention orientation. Each orientation interrupts reading.

Studies comparing screen reading to print reading consistently find lower comprehension and retention for screen reading. The difference is not inherent to screens versus paper. It reflects interface design choices that prioritize engagement and advertising over reading comprehension.

E-readers with minimal interfaces approach print reading performance. The simple interface eliminates competing attention demands and provides consistent typography. The improvement demonstrates that the comprehension impairment comes from interface choices rather than medium constraints.

Why Personalization Creates Filter Bubbles

Content personalization algorithms optimize for engagement. The algorithm presents content predicted to generate interaction: clicks, likes, shares, or extended viewing time. Content that generates more interaction gets presented more frequently.

This optimization creates filter effects. Content confirming existing beliefs generates more engagement than content challenging them. Emotionally provocative content generates more engagement than neutral information. Personalized feeds progressively filter toward confirming and provocative content.

The filtering is not intentional bubble creation. It is a consequence of engagement optimization. Users engage more with content that confirms their views. The algorithm learns this pattern and amplifies it. The amplification occurs automatically through standard machine learning processes.

Users experience increasingly homogeneous content exposure. The homogeneity is not visible from inside the filter. The algorithm does not present rejected alternatives for comparison. Users see only what the algorithm predicted they would engage with.

This creates epistemic closure: evidence against your existing beliefs becomes statistically rare in your information environment. The closure is not due to active censorship. It emerges from personalization algorithms optimizing engagement without considering information diversity.

The cognitive impact is confirmation bias amplification. Humans already show confirmation bias in information seeking and evaluation. Personalized feeds amplify this by making confirming information abundant and contradicting information rare. The natural bias gets magnified by algorithmic selection.

The Cognitive Cost of Context Switching

Each application, interface, and information system has unique interaction patterns. Learning these patterns requires time and cognitive resources. Maintaining fluency across multiple systems requires keeping multiple interaction models active.

The proliferation of applications increases context-switching costs. A knowledge worker might use email, chat, project management, documentation, development environment, and video conferencing applications daily. Each has different interaction patterns, keyboard shortcuts, and information architectures.

Switching between applications requires retrieving the relevant interaction model from memory. Frequently used applications maintain active models in working memory. Less frequent applications require effortful retrieval. The retrieval takes time and displaces other working memory contents.

The cost scales with interface complexity and interaction pattern distinctiveness. Switching between two similar applications carries minimal cost. Switching between applications with conflicting interaction patterns carries high cost as you suppress one pattern and activate another.

Interface inconsistency across applications reflects competition rather than user-centered design. Each application optimizes its own patterns. No coordination occurs across the ecosystem. Users bear the cognitive cost of maintaining multiple inconsistent models.

Attempts at standardization through platform interface guidelines have limited success. Applications differentiate through novel interfaces to stand out. The differentiation creates uniqueness that aids application recognition but impairs interaction fluency as users must learn and maintain more distinct patterns.

How Undo Functionality Changes Risk Taking

Cognitive processing allocates effort to decisions based on decision reversibility. Irreversible decisions receive more careful evaluation than reversible ones. Undo functionality makes most interface actions reversible, reducing careful evaluation.

This enables faster interaction. Users can act and observe results rather than carefully planning before acting. The exploratory approach works well for learning and discovery. It creates problems when actions have hidden irreversibility.

A user deletes files, believing deletion is reversible through undo. The specific deletion method bypasses the recycle bin. The deletion is permanent. The user discovers the irreversibility only after acting.

Interface design that makes most actions reversible trains users to expect reversibility. When occasional actions are irreversible, users do not adjust their decision process appropriately. The rare irreversible action receives the same minimal evaluation as reversible actions.

This creates systematic underestimation of action consequences in digital environments. The prevalence of undo functionality reduces risk evaluation. When risks do exist, they are not adequately assessed.

The pattern extends beyond undo to autosave, version history, and backup systems. These safety mechanisms enable careless action by reducing consequences. When the mechanisms fail or are unavailable, users still operate carelessly because the behavioral pattern is established.
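One design remedy is to make reversibility explicit in the interface contract: reversible deletes go through a trash folder, and the rare irreversible delete refuses to run without explicit acknowledgment. This is a minimal stdlib sketch; the trash location and function names are hypothetical, and a real application would use the OS trash facility:

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical app-local trash folder (an illustration, not a real OS API).
TRASH = Path(tempfile.gettempdir()) / "app_trash"

def soft_delete(path: Path) -> Path:
    """Reversible delete: move the file into the trash folder,
    so undo is just a move back."""
    TRASH.mkdir(exist_ok=True)
    target = TRASH / path.name
    shutil.move(str(path), str(target))
    return target

def restore(name: str, dest: Path) -> None:
    """The corresponding undo."""
    shutil.move(str(TRASH / name), str(dest))

def hard_delete(path: Path, confirmed: bool = False) -> None:
    """Irreversible delete: refuses to run unless the caller
    explicitly acknowledges there is no undo."""
    if not confirmed:
        raise PermissionError("irreversible action: pass confirmed=True")
    path.unlink()
```

The point of the `confirmed` flag is not friction for its own sake: it forces the decision process to shift modes exactly when reversibility disappears, instead of letting the trained expectation of undo carry over silently.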

The Attention Residue Problem

Task-switching leaves attention residue. After switching from task A to task B, part of your attention remains allocated to task A for a period. The residue impairs task B performance until attention fully transfers.

The residue duration depends on task A characteristics. Incomplete tasks leave more residue than completed tasks. Interrupted tasks leave more residue than voluntarily concluded tasks. High-priority tasks leave more residue than low-priority tasks.

Technology interfaces create chronic attention residue through continuous task interruption. Email interrupts writing. Chat interrupts email. Notifications interrupt everything. Each interruption leaves residue that impairs the interrupted task when you return to it.

The residue is not consciously accessible. You resume the interrupted task feeling ready to work. Objective performance shows impairment. The impairment manifests as slower processing, more errors, and reduced creative insight.

Studies measuring performance after interruption find effects lasting 15-20 minutes. A single interruption impairs performance for a period much longer than the interruption itself. Multiple interruptions per hour create continuous impairment.

Interfaces that minimize interruptions or batch them reduce attention residue. Email checking on a schedule rather than on notification reduces residue by allowing tasks to complete before interruption. The same number of emails processed in batches creates less total impairment than continuous interruption.

Users optimizing productivity should optimize for residue minimization, not interrupt minimization alone. Some interruptions are necessary. Batching necessary interruptions reduces total residue relative to scattered interruptions of the same total duration.
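The batching claim can be made concrete by merging overlapping residue windows. The 15-minute residue duration matches the low end of the studies mentioned earlier; the interruption schedules are illustrative assumptions:

```python
def impaired_minutes(interrupt_times, residue=15):
    """Total minutes spent under attention residue, merging overlapping
    post-interruption windows (each window lasts `residue` minutes)."""
    total, window_end = 0, None
    for t in sorted(interrupt_times):
        start = t if window_end is None or t >= window_end else window_end
        window_end = t + residue
        total += window_end - start
    return total

# 24 interruptions across an 8-hour day (480 minutes):
scattered = [i * 20 for i in range(24)]                        # one every 20 min
batched   = [h * 120 + i for h in range(4) for i in range(6)]  # 4 batches of 6

print(impaired_minutes(scattered))  # 360: most of the day under residue
print(impaired_minutes(batched))    # 80: windows overlap within each batch
```

The same 24 interruptions cost 360 impaired minutes when scattered but 80 when batched, because residue windows inside a batch overlap instead of tiling the whole day.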

Why Dark Patterns Exploit Cognitive Limitations

Dark patterns are interface designs that trick users into actions against their interest. The patterns exploit cognitive limitations and automatic processes rather than enabling informed choice.

Opt-out defaults exploit status quo bias. Users disproportionately accept default settings. Making privacy-invasive settings default means most users accept them despite preferring privacy if forced to choose actively.

Hidden costs exploit attention limits. The full price appears only at checkout after users have invested time in selection and comparison. The sunk time cost makes checkout more likely despite the price being higher than initially indicated.

Forced continuity exploits memory limits. Free trials automatically convert to paid subscriptions. Users must remember to cancel before trial end or face charges. The design relies on users forgetting the cancellation requirement.

Confirmshaming exploits social motivation. Decline options are phrased to create social shame: “No thanks, I don’t want to save money.” The phrasing makes declining psychologically costly even when declining is the rational choice.

These patterns are effective because they target automatic processes and cognitive limits rather than deliberative reasoning. Users consciously want privacy, accurate pricing, and control over subscriptions. The dark patterns bypass conscious preferences by exploiting attention, memory, and social motivation systems.

The prevalence of dark patterns indicates that interface designers understand cognitive science well enough to exploit it. The same understanding could enable user-serving design. The choice to exploit rather than support reflects incentive structures where engagement and revenue metrics outweigh user welfare.

The Cognitive Impact of Always-On Connectivity

Continuous connectivity eliminates boundaries between work and personal time, between focused effort and reactive responding, between present engagement and remote demands. The elimination creates chronic partial attention where full engagement becomes rare.

Partial attention means monitoring multiple channels simultaneously while fully engaging with none. You are in a conversation while monitoring email. You are writing while checking chat. You are in a meeting while reading messages.

The monitoring creates continuous cognitive load. Each monitored channel consumes attention resources. The consumption is often minimal per channel but cumulative across channels. Monitoring five channels simultaneously leaves minimal resources for deep engagement with any single task.

The load is not subjectively obvious. People report feeling normally attentive while objectively showing impaired performance on all tasks. The impairment appears in reduced comprehension, slower processing, and missed details.

Always-on connectivity also prevents cognitive recovery. Recovery from sustained cognitive effort requires periods without demands on the same cognitive systems. If connectivity means work demands can arrive at any time, true recovery never occurs.

The result is chronic cognitive depletion where baseline performance degrades over time. Sleep provides biological restoration but not cognitive restoration if you remain in monitoring mode even during personal time. Full restoration requires both biological rest and boundary creation that eliminates monitoring demands.

What Cognitive Science Suggests About Interface Design

Cognitive science identifies clear principles for interface design that supports rather than exploits human cognitive function. Implementation requires prioritizing user cognitive health over engagement metrics.

Respect attention limits. Minimize interruptions. Batch notifications. Provide notification management controls that actually work. Default to quiet rather than intrusive. Design assumes attention is the user's scarcest resource.

Reduce cognitive load. Maintain consistent interaction patterns. Minimize required context switching. Provide clear information architecture. Show only relevant information. Design assumes working memory is limited.

Support rather than exploit reward systems. Provide predictable value rather than variable reward schedules. Make quality content easily accessible rather than intermixed with filler. Design for user goals rather than engagement metrics.

Enable natural stopping points. Implement finite content views with clear endings. Require explicit action to continue. Provide progress indicators. Design assumes users want control over session duration.

Optimize for comprehension over clicks. Use clean typography. Minimize distractions during reading. Separate content from navigation. Provide distraction-free reading modes. Design assumes reading comprehension matters.

Be transparent about cognitive costs. Indicate when actions are irreversible. Warn about attention fragmentation risks. Provide usage statistics. Design assumes users deserve the information needed to make informed attention allocation decisions.

These principles are known. Their implementation is rare because engagement metrics reward their opposites. Interfaces that respect cognitive limits generate less engagement than interfaces that exploit them. Market competition drives toward exploitation unless constrained by regulation or user resistance.

The Problem With Technological Solutionism

Technology companies often propose technological solutions to problems technology created. Attention fragmentation from notifications gets addressed through notification management apps. Information overload from infinite content gets addressed through content curation algorithms. Digital distraction gets addressed through screen time tracking tools.

The pattern treats symptoms while preserving or amplifying root causes. Notification management apps help but do not address why applications generate excessive notifications. Curation algorithms help but do not address why infinite content exists. Screen time tracking helps but does not address why interfaces are designed for extended engagement.

The technological solutions often introduce new problems. Curation algorithms create filter bubbles. Screen time apps create anxiety about usage. Notification managers become another application requiring attention.

Real solutions require changing incentive structures that drive exploitative design. Engagement-based business models reward attention capture and extended usage. Alternative business models that reward user goal achievement rather than time-on-platform would change design incentives.

User-controlled algorithms where individuals set curation criteria rather than platforms optimizing for engagement would address filter bubbles. Default-off notifications where applications cannot push would address attention fragmentation. Time-limited interfaces that close after specified durations would address infinite scroll.

These solutions require giving users control at the cost of platform engagement metrics. The solutions are technically feasible. They are not implemented because they conflict with business model optimization. Technological solutionism allows platforms to appear responsive while preserving the business model driving the problems.

What Users Can Control

Individual users cannot change platform business models or interface design incentives. They can control their own technology configuration and usage patterns within the constraints platforms impose.

Aggressive notification disabling reduces involuntary attention capture. The effort required to disable notifications across applications is high. The cognitive benefit of elimination is higher. Most notifications are not time-sensitive or important. Batch processing through scheduled checking loses minimal value while eliminating continuous interruption.
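Batch processing can be made concrete with a small sketch. This is a toy model, not a real OS notification API; the class and the idea of an urgent-sender allowlist are assumptions for illustration:

```python
class NotificationBatcher:
    """Hold notifications in a queue; deliver only at scheduled checks.
    Genuinely time-sensitive senders break through immediately."""

    def __init__(self, urgent_senders=()):
        self.queue = []
        self.urgent_senders = set(urgent_senders)

    def receive(self, sender, message):
        if sender in self.urgent_senders:
            return [(sender, message)]  # delivered now: interrupts
        self.queue.append((sender, message))
        return []  # held: no interruption until the next batch

    def scheduled_check(self):
        """The user's deliberate check: drain and return the batch."""
        batch, self.queue = self.queue, []
        return batch
```

Usage mirrors the argument in the text: most notifications wait without loss, and continuous interruption collapses into a few deliberate checks per day.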

Time-blocking for deep work creates interruption-free periods. This requires not just silencing notifications but closing communication applications entirely. Partial silencing often fails because visible indicators still capture attention. Complete disconnection for defined periods enables sustained focus.

Single-tasking rather than multitasking reduces context-switching costs. This requires conscious resistance to the ease of task-switching. The effort of resisting switching is smaller than the cumulative cost of switching. Serial task completion is faster than parallel task attempts despite feeling slower.
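A toy arithmetic model makes the switching-cost claim concrete. The numbers are invented for illustration (10-minute work slices, a 2-minute resumption cost per switch), not empirical estimates:

```python
def serial_time(tasks):
    """Complete each task to the end before starting the next."""
    return sum(tasks)

def interleaved_time(tasks, slice_len=10, switch_cost=2):
    """Round-robin in fixed slices, paying a resumption cost
    each time a task is picked back up."""
    remaining = list(tasks)
    total = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            work = min(slice_len, r)
            remaining[i] -= work
            total += work + switch_cost
    return total
```

For two 30-minute tasks, serial completion takes 60 minutes while the interleaved schedule takes 72: the same work plus the accumulated cost of every resumption, which is why interleaving feels faster but finishes slower.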

Scheduled connectivity rather than always-on creates recovery time. Defining periods where you are unreachable enables full cognitive disengagement. The scheduling must be communicated to set expectations. Without communicated boundaries, unavailability creates anxiety that prevents recovery.

Physical separation between work and personal devices helps maintain cognitive boundaries. Using the same device for both means work demands intrude during personal time. Separate devices enable complete disconnection: leave the work device behind during personal time.

These individual strategies mitigate but do not eliminate problems created by exploitative interface design. They require sustained effort fighting against interface defaults. The effort requirement is itself a cognitive cost. Platforms could eliminate the need for user mitigation through user-serving design. They choose not to because exploitation is more profitable.

The Fundamental Mismatch

Human cognitive architecture evolved for ancestral environments with different information availability, different threat profiles, and different social structures. Modern technology operates in environments with information superabundance, artificial novelty generation, and mediated social interaction.

The mismatch creates predictable failure modes where automatic cognitive responses optimized for ancestral contexts produce maladaptive behavior in technological contexts. Attention systems respond to artificial novelty as if it indicates important events. Reward systems respond to variable schedules as if they indicate valuable exploration opportunities. Social cognition responds to engagement metrics as if they indicate genuine social connection.

These responses are not errors. They are adaptive mechanisms operating outside their evolved context. The mechanisms cannot distinguish ancestral-relevant stimuli from artificial stimuli designed to exploit them. Interface designers understand this and optimize for exploitation.

Addressing the mismatch requires either changing human cognitive architecture or changing technological interfaces. Cognitive architecture changes through evolutionary timescales. Interface changes could occur immediately through design choices. The choice to preserve exploitative interfaces rather than implement user-serving alternatives reflects prioritization of engagement metrics over user cognitive health.

Users can mitigate exploitation through configuration and usage discipline. Mitigation is effortful and incomplete. Platforms could eliminate the need for mitigation through ethical design. They choose exploitation. Understanding cognitive science explains why exploitation works so reliably. It also clarifies what ethical alternatives would require.