Organizational Systems

Most Decisions Fail Before They're Made

Bad decisions are rarely failures of choice. They are the inevitable result of broken setup, wrong participants, and unanswerable questions.

The failure point of most organizational decisions is not the moment of choice. It is everything that happens before anyone is asked to choose.

By the time a decision reaches the point where someone says yes or no, the outcome is largely determined. The question was framed badly. The wrong people were in the room. Critical information was missing or ignored. The constraints were unclear. The decision was set up to fail.

Executives spend enormous energy on decision frameworks, but almost none on decision setup. They argue about who has authority to decide, but not whether the decision as posed can actually be answered. They optimize the wrong part of the process.

The Question Determines the Answer

Most decisions fail because the question is malformed.

A team asks: “Should we build feature X or feature Y?” The real question is: “What problem are we solving, and is either of these features the right solution?” But that question was never asked, so the decision proceeds on false premises.

A leader asks: “Should we invest in market A or market B?” The real question is: “Do we have the capability to succeed in either market, and if not, what would we need to build first?” But the framing assumes capability exists, so the decision will be wrong regardless of which option is chosen.

Bad framing is not a minor issue. It is the primary cause of decision failure. If the question is wrong, the answer is irrelevant.

Organizations rarely step back to interrogate the question. They assume the question is correct because someone senior posed it. They proceed to analyze options, build models, and debate trade-offs, all in service of a fundamentally broken premise.

Wrong People, Wrong Room

The second failure mode is participant selection. Decisions fail when the people making them lack the information, incentives, or authority to make them correctly.

A strategic decision is made by people who do not understand operations. An operational decision is made by people who do not understand strategy. A technical decision is made by people who have not written code in a decade. A product decision is made by people who have never spoken to a customer.

This is not incompetence. It is structural. The people with decision authority are often not the people with decision-relevant knowledge. The people with knowledge are often not in the room.

The result is predictable. The decision is based on abstraction, not reality. It sounds reasonable in a conference room. It collapses on contact with the actual system.

The problem compounds when the people in the room have misaligned incentives. They are optimizing for their own unit, their own metrics, their own career safety. They are not optimizing for the decision itself. The decision becomes a negotiation, not an analysis.

Decisions made by committee with misaligned incentives produce incoherent compromises that satisfy no one and solve nothing.

Information Asymmetry as Sabotage

Decisions fail when the people making them do not have access to the information that would change the decision.

This is not always an accident. Information is often withheld strategically. Teams present only the data that supports their preferred outcome. They bury the data that contradicts it. They claim the analysis is incomplete when it shows the wrong answer. They declare certainty when the evidence is weak.

Leaders contribute to this by signaling what answer they want before the decision is made. Teams learn to manufacture the justification for that answer. The decision process becomes theater. The outcome was determined before the analysis began.

Even when information is not deliberately hidden, it is often structurally inaccessible. The person making the decision does not know what questions to ask. The people who have the relevant information do not realize it is relevant. The information exists in the organization, but it never reaches the decision.

By the time the decision is made, it is based on a filtered, incomplete, and often misleading picture of reality.

Undefined Constraints Make Decisions Unsolvable

Decisions fail when the constraints are not clear.

A team is told to “reduce costs” but not by how much, by when, or which costs are untouchable. So they make cuts that seem reasonable in isolation but destroy critical functions.

A product team is told to “move faster” but not what they are allowed to sacrifice. So they cut corners in ways that create technical debt, security vulnerabilities, and customer churn.

A leader is told to “grow the business” but not what level of risk is acceptable, what resources are available, or what trade-offs are permitted. So they make bets that look aggressive but are actually reckless.

Undefined constraints do not give people freedom. They create ambiguity. People fill in the constraints themselves, often incorrectly, and then get blamed when the decision does not align with the unstated expectations.

Good decisions require clear constraints. Without them, the decision is unanswerable.

The Pretense of Reversibility

Decisions fail when they are treated as reversible but are not.

Leaders like to say decisions are “two-way doors.” You can try something, and if it does not work, you reverse it. This is true for some decisions. It is not true for most.

Hiring decisions are not reversible. Firing someone after six months does not undo the decision to hire them. The cost is already paid in onboarding, lost productivity, and team disruption.

Architectural decisions are not reversible. You can rewrite the system later, but the cost of doing so is often prohibitive. The decision to build it the first way is effectively permanent.

Market positioning decisions are not reversible. You can rebrand, but you cannot erase how customers perceive you. The decision to position the product one way constrains all future positioning.

Treating irreversible decisions as reversible leads to recklessness. People make large bets without adequate analysis because they believe they can undo them if they are wrong. They cannot.

Decision Debt Accumulates Silently

Some decisions are not made incorrectly. They are not made at all. The decision is deferred, or it is made implicitly by inaction.

This creates decision debt. The organization proceeds as if a decision were made, but no one actually decided. Different parts of the organization assume different answers. Conflicts emerge later when the implicit decisions collide.

Decision debt compounds. The longer a decision is deferred, the more organizational behavior ossifies around the unstated assumption. By the time someone tries to make the decision explicit, it is too late. The organization has already committed.

This is worse than making a bad decision. A bad decision can be reversed. Decision debt is structural. It is embedded in how teams operate, how systems are built, how customers are served. Unwinding it requires rearchitecting the organization.

Analysis Paralysis as Decision Corruption

Decisions fail when the analysis process is used to avoid making the decision.

More research. More models. More stakeholder input. More scenario planning. The process expands to fill the time available, and the time available is infinite because no one has set a deadline.

This is not diligence. It is avoidance. The analysis is not producing better decisions. It is producing permission to delay.

Analysis paralysis corrupts decisions in two ways. First, it allows the context to change while the analysis is ongoing, making the decision obsolete by the time it is made. Second, it signals to the organization that decisions are low priority. Teams stop waiting for decisions and start making local choices that diverge from each other.

The result is that the decision, when finally made, is both late and irrelevant.

Optimizing for Optics Over Outcomes

Decisions fail when the goal is not to make the right decision but to make a decision that looks defensible.

The decision is optimized for how it will be perceived, not for whether it will work. Leaders ask: “Can I justify this?” instead of “Will this succeed?” They build elaborate rationale for decisions that are obviously wrong because the rationale provides cover.

This is decision-making as risk management. The goal is not to create value. The goal is to avoid blame. If the decision fails, the leader can point to the process, the analysis, the consultation. They followed best practices. The failure was unforeseeable.

Except it was not. The decision was broken from the start. Everyone involved knew it. But no one had the incentive to say so.

Optimizing for defensibility produces decisions that are safe to make but dangerous to implement.

False Precision in Uncertain Environments

Decisions fail when precision is mistaken for accuracy.

A financial model projects revenue to three decimal places. The model is precise. It is also wrong. The underlying assumptions are guesses, but the precision creates the illusion of certainty.

A roadmap specifies feature delivery dates six quarters out. The dates are precise. They are also fiction. The team does not know what they will learn in the next quarter, but the plan pretends they do.

A strategy document defines market share targets, cost structures, and margin expectations with granular detail. The detail is precise. It is also useless. The market will not cooperate with the plan.

Precision is seductive. It makes the decision feel rigorous. It provides the appearance of control. But in uncertain environments, precision is a lie. The decision is based on assumptions that will not hold, and the precision obscures that fact.

Better to acknowledge uncertainty explicitly and make decisions that are robust to a range of outcomes than to pretend you know the future.

Authority Without Context

Decisions fail when authority and context are separated.

The person with authority to make the decision does not have the context to make it well. The person with context does not have authority to decide. The decision is made badly or not at all.

This is structural in hierarchical organizations. Authority flows upward. Context flows downward. The two rarely meet.

Senior leaders have authority but are distant from the problem. They make decisions based on summaries, abstracts, and presentations. The nuance is lost. The decision is technically correct but operationally unworkable.

Junior employees have context but no authority. They understand the problem intimately. They know what will and will not work. But they cannot decide. They escalate, and the decision is made by someone who does not understand it.

The result is decisions that are rational in the abstract but irrational in practice.

Incentive Misalignment as Decision Poison

Decisions fail when the people making them benefit from the wrong outcome.

A vendor selection is made by the team that will manage the vendor. They choose the vendor that is easiest to work with, not the one that best serves the business.

A budget allocation is made by the leaders whose budgets are being allocated. They optimize for their own headcount, not for organizational priorities.

A product roadmap is determined by the teams building the product. They prioritize features that are interesting to build, not features that customers need.

Incentive misalignment does not require malice. It is structural. People optimize for the metrics they are measured on, the outcomes they are rewarded for, and the risks they are penalized for. If those incentives do not align with the decision’s purpose, the decision will be wrong.

Good decision processes separate decision-making authority from the incentives that would corrupt the decision.

What Good Decision Setup Looks Like

Good decisions start with good setup.

The question is clear, specific, and answerable. It has been interrogated to ensure it is the right question, not just the obvious question.

The participants have the knowledge, authority, and incentives to make the decision correctly. They are not there to represent their function. They are there to make the right call for the organization.

The information is complete, accurate, and accessible. Relevant data is surfaced. Contradictory evidence is not hidden. Uncertainty is acknowledged explicitly.

The constraints are defined. The decision-makers know what is fixed, what is flexible, and what trade-offs are acceptable.

The decision has a deadline. The analysis is time-boxed. The decision will be made when the deadline arrives, whether or not all questions are answered.

The incentives are aligned. The people making the decision benefit from getting it right, not from avoiding blame or protecting their turf.

The decision is reversible if possible, and if it is not reversible, that fact is acknowledged upfront. The level of diligence matches the irreversibility of the decision.

This is not complicated. It is just rarely done.

The Cost of Broken Setup

Organizations tolerate broken decision processes because the cost is diffuse and delayed.

A bad decision does not fail immediately. It fails over months or years. By the time the failure is obvious, the people who made the decision have moved on. The connection between the broken setup and the bad outcome is lost.

The organization does not learn. The same broken processes are used for the next decision. The failure repeats.

The cumulative cost is enormous. Resources are wasted. Opportunities are missed. Teams become cynical. They stop trusting leadership to make good decisions because leadership consistently does not.

The fix is not better decision frameworks. It is better decision setup. Get the question right. Get the right people in the room. Get the information they need. Define the constraints. Set a deadline. Align the incentives.

Most decisions fail before they are made. Fix the setup, and the decisions fix themselves.