
Incentive Quotes: What They Hide About How Behavior Actually Changes

Popular quotes about incentives sound profound but obscure the mechanisms that actually drive behavior. They frame alignment as motivation when it's about power, ignore gaming dynamics, and mistake symptoms for causes.


Incentive quotes get repeated in boardrooms and management books because they sound like compressed wisdom. “What gets measured gets managed.” “You get what you incentivize.” “Show me the incentive and I’ll show you the outcome.”

The problem is that memorable phrasing often hides the actual mechanisms. These quotes frame incentives as alignment tools when they’re usually displacement mechanisms. They imply predictable cause-and-effect when the reality is gaming, erosion, and unintended consequences. They compress complex system dynamics into phrases too simple to be useful.

Understanding what incentive quotes actually reveal requires examining what they systematically obscure.

“Show Me the Incentive and I’ll Show You the Outcome”

Charlie Munger’s formulation gets cited as fundamental truth about human behavior. The claim is straightforward: people respond to incentives predictably. Design the right incentive structure and you get the desired behavior.

This works in narrow, well-defined contexts. Put a bounty on rats and people will bring you rats. Pay per line of code and you’ll get more lines of code. Reward sales volume and salespeople will maximize volume.

The problem surfaces when the incentive metric diverges from the actual goal. The bounty on rats leads to rat farming. Lines of code incentivize verbose garbage instead of elegant solutions. Sales volume rewards pushing products customers don’t need to buyers who can’t afford them.

Munger’s quote assumes the incentive and the outcome are the same thing. They’re not. The incentive is a proxy metric. The outcome is what you actually want. The space between proxy and goal is where gaming occurs.

When someone designs an incentive system, they’re making a bet that the proxy metric correlates perfectly with the unmeasured goal. This bet almost always fails. People optimize for what gets measured because measurement determines consequences. The unmeasured goals get ignored because they carry no weight in the evaluation system.

The quote reveals that incentives drive behavior. What it hides is that the behavior you get is often not the behavior you wanted. It’s the behavior that maximizes the metric while minimizing effort toward the actual objective.
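The gap between proxy and goal can be made concrete with a toy model. This is an illustrative sketch only: the cost numbers and reward function are invented, and it assumes a single worker who splits a fixed effort budget between real work and gaming, where gaming raises the metric at a quarter of the effort cost but contributes nothing to the goal.

```python
# Toy model of proxy optimization. A worker splits an effort budget
# between real work and gaming. Real work advances both the proxy
# metric and the true goal; gaming advances only the proxy, at a
# fraction of the cost. All numbers are invented for illustration.

def proxy_score(real, gamed):
    return real + gamed          # the metric can't tell them apart

def goal_value(real, gamed):
    return real                  # only real work advances the goal

def best_allocation(effort_budget, gaming_cost=0.25):
    # Each unit of gamed output costs gaming_cost effort; each unit
    # of real output costs 1.0 effort. A proxy-maximizing worker
    # searches for the split with the highest measured score.
    best = None
    for real_units in range(effort_budget + 1):
        remaining = effort_budget - real_units
        gamed_units = int(remaining / gaming_cost)
        score = proxy_score(real_units, gamed_units)
        if best is None or score > best[0]:
            best = (score, real_units, gamed_units)
    return best

score, real, gamed = best_allocation(10)
print(f"proxy score: {score}, real output: {real}, gamed output: {gamed}")
print(f"goal value:  {goal_value(real, gamed)}")
```

Under these assumptions the proxy-maximizing allocation puts the entire budget into gaming: the metric hits its maximum while the goal value is zero. The exact numbers don't matter; the point is that whenever gaming is cheaper per metric unit than real work, a pure metric-maximizer abandons the goal entirely.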

“What Gets Measured Gets Managed”

Peter Drucker’s phrase shows up in every discussion about metrics and performance management. The implied mechanism is that measurement creates visibility, visibility enables management, and management improves outcomes.

The actual mechanism is different. What gets measured gets gamed. What gets managed is the metric, not the underlying process. Improvement in the metric often masks degradation in the thing you actually care about.

Call centers measure average handle time. The metric is supposed to correlate with efficiency. What actually happens: representatives rush customers off calls regardless of whether the issue is resolved. Handle time improves. Customer satisfaction collapses. The metric diverges from the goal but continues to be optimized because it’s what gets measured.

Hospitals measure patient discharge timing. The goal is efficient throughput. The measured behavior is discharging patients before they’re ready in order to hit timing targets. Readmission rates increase. The metric improves while patient outcomes degrade.

Software teams measure velocity. The goal is sustainable delivery of value. The measured behavior is inflating story points, cutting quality practices, and deferring technical debt. Velocity increases. Code quality collapses. The system optimizes the metric at the expense of the actual objective.

The quote persists because measurement feels like progress. If you can quantify something, you must be managing it better. This confuses the proxy for the goal and assumes optimization can’t make things worse.

What Drucker’s formulation obscures is that managing the metric is not the same as managing the system. Metrics create target fixation. They redirect effort from complex, unmeasured objectives toward simple, gameable proxies. The management you get is often counterproductive to the outcomes you want.

“You Get What You Incentivize”

This shows up as a universal truth in economics and management theory. Design the right incentives and you align behavior with goals. Get the incentives wrong and behavior diverges.

The framing treats incentives as alignment mechanisms. The reality is that incentives are displacement mechanisms. They don’t add motivation to existing work. They replace one kind of motivation with another.

Before explicit incentives, someone might balance multiple considerations: professional standards, customer needs, long-term sustainability, ethical constraints, and system health. They make judgment calls because the work demands it.

After explicit incentives, the same person optimizes for the metric that determines their evaluation. Everything that doesn’t affect the incentive becomes optional. Professional standards get shortcuts. Customer needs get ignored unless they affect measured outcomes. Long-term considerations disappear because the incentive operates on a quarterly cycle. Ethics become constraints to work around rather than guides for decision-making.

The incentive doesn’t align behavior with organizational goals. It replaces complex judgment with single-metric optimization. The behavior you get is narrower, more brittle, and more prone to gaming than the behavior you had before introducing the incentive.

This is why adding incentives to intrinsically motivated work often makes performance worse. Teachers who care about student learning start teaching to the test when test scores determine their evaluation. Researchers who pursue interesting questions start chasing citation metrics when citations determine funding. Developers who build quality systems start maximizing feature count when features determine promotion.

The quote assumes incentives add motivation. What they actually do is crowd out everything except metric optimization. You get what you incentivize, but you also lose everything you didn’t.

“Incentives Are Superpowers”

This framing shows up in startup advice and management consulting. The claim is that properly designed incentives can achieve what other interventions can’t. Align incentives and everything else follows.

The appeal is obvious. If incentives are superpowers, then organizational problems are just incentive design problems. No need for culture work, trust building, or systemic changes. Just fix the incentives and behavior fixes itself.

This breaks in several ways.

First, incentives work differently at different organizational layers. What motivates an individual contributor often demotivates a manager. What aligns a team creates zero-sum competition between teams. What optimizes a department suboptimizes the organization. There is no single incentive structure that aligns all levels simultaneously.

Second, incentive systems accumulate over time. Base salary, performance bonuses, equity grants, promotion criteria, peer recognition, project assignments, and informal status all create different, often contradictory incentives. The person experiencing these incentives doesn’t optimize for one. They navigate the conflicting pressures by satisficing: doing the minimum necessary across all incentive systems to avoid punishment while maximizing personal benefit.

Third, incentives only work when behavior is observable. Much of the valuable work in organizations is invisible. Preventing problems before they occur. Sharing knowledge informally. Mentoring colleagues. Maintaining code quality. Building relationships. These activities create long-term value but don’t show up in metrics. Incentive systems that can’t measure these behaviors effectively penalize them by opportunity cost. Time spent on unmeasured activities is time not spent optimizing measured ones.

The superpower framing hides these complications. It makes incentive design sound like a precision tool when it’s usually a crude instrument that creates more problems than it solves.

“If You Want to Change Behavior, Change the Incentives”

This appears in change management frameworks as a foundational principle. Resistance to change gets diagnosed as incentive misalignment. The solution is to restructure rewards so the new behavior becomes rational.

The mechanism this assumes: people calculate costs and benefits, then choose the option that maximizes their payoff. Change the calculation and you change the choice.

Real behavioral change is messier. People optimize for multiple objectives simultaneously. They have limited information about actual costs and benefits. They operate under uncertainty about whether new incentives will persist or be reversed. They face social pressure from peers whose incentives may not have changed. They carry habits from previous incentive regimes that take time to unlearn.

Changing incentives also creates transitional losers. The person whose expertise was valuable under the old system loses status under the new one. The manager whose authority came from control loses power when control is distributed. The long-tenured employee who climbed the hierarchy loses position when the hierarchy flattens.

These losses are not symmetrical. The person who loses can identify their loss immediately and personally. The diffuse future benefits of the new system are abstract and uncertain. Rational resistance occurs when personal loss is concrete but organizational benefit is speculative.

The quote frames incentive change as sufficient. The reality is that incentive changes often require removing the people most threatened by them, creating parallel reward structures during transition, or forcing compliance through monitoring and enforcement until new behaviors become habitual.

Changing behavior requires changing incentives. But changing incentives alone rarely changes behavior in the intended direction.

“Alignment Is Everything”

This shows up in organizational design and strategy discussions. The claim is that performance problems stem from misalignment. Get incentives, goals, and metrics aligned and execution becomes straightforward.

The problem is that perfect alignment is only possible in trivially simple systems. The moment you have multiple teams, multiple time horizons, multiple stakeholders, or multiple objectives, alignment breaks down.

Sales wants shorter contract cycles to hit quarterly targets. Engineering wants longer development timelines to build quality. Finance wants cost reduction. Product wants feature expansion. Legal wants risk mitigation. Every function has locally rational incentives that conflict with other functions.

Alignment frameworks try to resolve this by creating shared incentives at the top level. Tie everyone’s compensation to company performance. Make cross-functional goals mandatory. Use OKRs to cascade objectives.

What actually happens is that people game the shared metrics while preserving their local optimization. Sales books revenue that engineering can’t deliver. Engineering commits to timelines they can’t meet. Teams coordinate on metric performance while actual work remains misaligned.

The alignment fantasy assumes you can design incentive structures that eliminate conflict. The reality is that organizational complexity creates inherent tensions. Multiple objectives are genuinely incompatible. Someone’s goals must be subordinated. Alignment is a political outcome, not a design outcome. It requires power to enforce priority hierarchies and the willingness to make some functions unhappy.

Treating alignment as achievable through incentive design obscures the need for authority, arbitration, and acknowledgment that trade-offs exist.

“Pay for Performance”

This shows up as both management philosophy and compensation structure. Tie rewards directly to measurable output. High performers earn more. Low performers earn less. Effort and outcome correlate naturally.

The assumption is that performance can be measured accurately, that measurement captures what matters, and that differential pay motivates better work.

None of these hold reliably.

Performance measurement in knowledge work is usually subjective or proxy-based. Who gets credit for collaborative work? How do you compare the developer who ships features to the one who prevents technical debt? What counts as high performance when value takes years to materialize?

Even when measurement is possible, it often captures the wrong thing. Sales revenue is measurable but doesn’t distinguish between sustainable client relationships and extractive short-term deals. Code commits are measurable but reward activity over thoughtfulness. Hours worked are measurable but penalize efficiency.

Differential pay based on measured performance also creates predictable distortions. People shift effort from hard-to-measure valuable work to easy-to-measure visible work. They compete with colleagues instead of collaborating. They take credit for others’ contributions. They game metrics rather than improving actual performance.

Pay for performance works when tasks are individual, output is measurable, quality is verifiable, and gaming is preventable. These conditions rarely coexist in complex organizations.

The phrase persists because it sounds fair. High performers should earn more. The problem is that “performance” as measured by incentive systems often diverges from actual value creation, and the act of measuring creates behaviors that undermine the goals the system was designed to achieve.

What Incentive Quotes Actually Reveal

These quotes persist because they compress uncomfortable truths into phrases that sound like solutions.

“Show me the incentive” acknowledges that people respond to consequences. What it hides is that they respond by gaming metrics, not by achieving goals.

“What gets measured gets managed” acknowledges that visibility drives attention. What it hides is that managing metrics is not the same as managing systems.

“You get what you incentivize” acknowledges that rewards shape behavior. What it hides is that they shape it by crowding out everything else.

“Change the incentives” acknowledges that behavior responds to consequences. What it hides is that changing consequences creates winners and losers, and losers resist rationally.

“Alignment is everything” acknowledges that coordination matters. What it hides is that alignment is a political outcome, not a design achievement.

“Pay for performance” acknowledges that differential effort deserves differential reward. What it hides is that measuring performance usually measures the wrong thing.

The quotes reveal that incentives matter. What they obscure is how they matter, what they cost, and what they destroy in the process of optimizing behavior.

How Incentives Actually Work in Practice

Incentives create focus by destroying peripheral vision. They drive metric optimization by crowding out unmeasured objectives. They align measurable behavior by creating conflict in unmeasured domains.

When incentive systems fail, it’s not because the design was slightly wrong. It’s because the fundamental assumption is flawed. The assumption is that complex, multi-objective work can be reduced to measurable proxies without losing essential information. This assumption fails in nearly every knowledge work context.

The alternative is not to abandon incentives. It’s to treat them as the crude, distorting, game-prone mechanisms they are and design accordingly.

Use incentives sparingly. The fewer metrics you optimize for, the less gaming occurs and the more room remains for judgment.

Measure the right level. Individual incentives create zero-sum competition. Team incentives create free-riding. Organizational incentives create diffusion of responsibility. None of these is correct universally. The right level depends on where coordination is most critical and where individual judgment matters most.

Change incentives slowly. Rapid shifts create chaos, resentment, and transitional gaming as people optimize for multiple regimes simultaneously.

Acknowledge what gets sacrificed. Every incentive system makes trade-offs. The unmeasured work becomes deprioritized. Make this explicit rather than pretending the incentive aligns everything.

Monitor for gaming. Assume people will find loopholes. When metrics improve but outcomes don’t, the system is being gamed. Adjust or abandon the metric before the gaming becomes institutionalized.
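The gaming signal described above, where the metric improves while the outcome doesn't, can be checked mechanically. The sketch below is a minimal illustration, not a production monitor; the function names, data, and zero-tolerance threshold are all invented for the example.

```python
# Minimal gaming check: flag a metric whose trend diverges from the
# outcome it is supposed to track. Names and thresholds are illustrative.

def trend(series):
    # Average period-over-period change across the series.
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

def gaming_suspected(metric_history, outcome_history, tolerance=0.0):
    # A metric that improves while the outcome is flat or declining is
    # the classic signature of gaming: the proxy moves, the goal doesn't.
    return trend(metric_history) > tolerance and trend(outcome_history) <= tolerance

# Invented example: the handle-time score "improves" each quarter
# while customer satisfaction, the actual outcome, degrades.
handle_time_score = [60, 65, 72, 80]
satisfaction      = [4.1, 3.9, 3.6, 3.2]

print(gaming_suspected(handle_time_score, satisfaction))  # → True
```

A real monitor would need noise handling and a meaningful tolerance, but the core comparison is the same: track the metric and the outcome it proxies side by side, and treat sustained divergence as the trigger to adjust or retire the metric.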

Pair incentives with constraints. If you incentivize speed, enforce quality thresholds. If you incentivize sales, cap tactics that damage customer relationships. Constraints prevent the optimization from destroying adjacent values.
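Pairing an incentive with a constraint amounts to optimizing the measured metric only within a floor on the protected value. A minimal sketch, with invented plan names and numbers, assuming speed is the incentivized metric and quality is the constrained one:

```python
# Sketch: optimize the incentivized metric (speed) only among options
# that clear a quality floor. Plans and numbers are invented.

plans = [
    {"name": "rush",     "speed": 10, "quality": 0.60},
    {"name": "balanced", "speed": 7,  "quality": 0.85},
    {"name": "careful",  "speed": 4,  "quality": 0.95},
]

QUALITY_FLOOR = 0.80   # constraint: optimization may not go below this

def pick_plan(plans, floor):
    # Filter by the constraint first, then maximize the incentive.
    eligible = [p for p in plans if p["quality"] >= floor]
    return max(eligible, key=lambda p: p["speed"])

print(pick_plan(plans, QUALITY_FLOOR)["name"])  # → balanced
```

Without the floor, the unconstrained maximizer picks the fastest, lowest-quality option; with it, the fastest option that still protects the adjacent value wins. The design choice is filter-then-optimize rather than folding quality into a blended score, because blended scores can be gamed by trading quality away at the margin.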

Treat incentive design as ongoing adjustment, not a one-time solution. Systems drift. Gaming evolves. What worked initially stops working. Incentive structures require constant recalibration based on how they’re actually being exploited.

What the Quotes Obscure

Incentive quotes are popular because they make organizational design sound simple. Find the right incentive and behavior follows. This is comforting but false.

Behavior follows from incentives, but not in the direction you expect. It follows toward metric optimization, not goal achievement. It follows by gaming the measurement system, not by improving actual performance. It follows by crowding out unmeasured work, not by aligning with organizational purpose.

The quotes compress these dynamics into phrases that sound like solutions. They’re not solutions. They’re reminders that incentives matter and warnings that they rarely work as intended.

When someone quotes an incentive maxim, ask what it’s hiding. What gaming does it ignore? What trade-offs does it obscure? What complexity does it flatten into false simplicity?

The quotes reveal that people respond to consequences. The reality requires examining which consequences, at what cost, and whether the behavioral change makes things better or just makes the metrics look good.