Decision Architecture
How decisions actually get made in organisations - from hypothesis framing through measurement to execution.
The real bottleneck
Most organisations believe their constraint is execution. They’re wrong.
The real bottleneck is decision speed. Not how fast teams can build, but how quickly leaders can decide what to build, when to shift course, and which signals actually matter.
You see this everywhere. A pricing change sits in committee for three months whilst competitors adjust weekly. A product launch waits for one more data point whilst the market moves on. A strategic shift gets nodded through in principle but never translates into changed behaviour.
The gap isn’t execution capability. It’s decision architecture - the largely invisible system that governs how choices get made, tested, and embedded into practice.
Most organisations have no architecture at all. They have habits, inherited processes, and political accommodations. What passes for decision-making is often just reactions dressed up as strategy.
Building proper decision architecture changes this. It doesn’t eliminate uncertainty or guarantee perfect calls. But it creates a system where organisations learn faster than their environment changes. That’s the only durable edge.
Strategy as hypothesis
Start with how strategy actually works.
A strategy is a belief about cause and effect. If we do this, we expect that. Framed this way, strategy becomes explicit enough to test.
It’s not a mood statement (“we will be customer-centric”) or an aspiration (“we aim to lead the category”). It’s a working hypothesis that others can understand, carry into their own decisions, and refine through practice.
This is applied scientific thinking. You state what you believe, make it testable, and create the conditions to learn whether you’re right.
Most strategies fail this test. They’re documents that say obvious things (“we will compete on quality and efficiency”) or vague things (“we will drive growth through innovation”). Flip these statements and the opposite sounds equally plausible. Which means you haven’t chosen anything.
Real strategy has a rational opposite. “We will compete on lowest cost, not distinctive value” is a choice. Either pole can work - they just demand different activity systems. What fails is straddling the middle.
Once you’ve made a real choice, the next question is how you’ll know if it’s working. This is where measurement comes in.
Measurement as decision design
Data doesn’t become information until it passes through a decision process.
Most organisations confuse the two. They build dashboards, track metrics, celebrate when numbers go up. But when you ask what they’ll do differently if churn rises or NPS drops, the room goes quiet. The data exists. The decision process doesn’t.
This gap is expensive. Teams spend weeks building reports that get glanced at in meetings, then set aside. Executives say they’re “data-driven” whilst making calls based on intuition, politics, or whoever spoke last.
The fix isn’t better dashboards. It’s explicit decision design.
Start with the decision, not the data. What choice are you trying to make? What would you do if the answer were X versus Y? If you can’t answer that, the measurement is decoration.
Then work backwards. What information would actually change your action? How will you collect it? Who needs to see it? What cadence matches the decision cycle?
This forces clarity. In practice, most measurements turn out to be unnecessary. The ones that remain are sharp, actionable, and directly tied to choices someone will make.
Take SG&A variance review. Most finance teams produce monthly reports showing spend by category - travel up 12%, contractors down 8%, software licences flat. These numbers sit in slide decks and get discussed in general terms. Nothing changes.
A proper decision design starts differently. The question isn’t “what did we spend?” It’s “which variances require action, and what will we do about them?”
So you set decision rules upfront. Variance over 15%? Finance investigates root cause and proposes correction. Between 10% and 15%? Department head explains in writing. Under 10%? No action needed, just monitor.
Suddenly the report has purpose. The data flows into explicit thresholds, which trigger defined actions, owned by named people. The measurement becomes information because it connects to decisions.
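As a sketch, the whole rule set fits in a few lines of code. The thresholds, category names, and wording below are illustrative, not a real finance system - the point is that each variance routes to a named action rather than a general discussion.

```python
from dataclasses import dataclass

@dataclass
class Variance:
    category: str
    pct_change: float  # month-on-month change: 0.12 means spend up 12%

def action_for(v: Variance) -> str:
    """Route a variance to a defined action via explicit thresholds."""
    size = abs(v.pct_change)
    if size > 0.15:
        return f"{v.category}: finance investigates root cause, proposes correction"
    if size >= 0.10:
        return f"{v.category}: department head explains variance in writing"
    return f"{v.category}: no action, monitor next cycle"

for v in [Variance("travel", 0.12),
          Variance("contractors", -0.08),
          Variance("software licences", 0.00)]:
    print(action_for(v))
```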
This discipline travels. Sales pipeline reviews. Product roadmaps. Hiring plans. Every measurement system should start with the decision it serves, then work backwards to the data required.
When you build this properly, three things happen. First, you collect less data - only what actually matters. Second, decisions get faster - the process is already designed. Third, you learn systematically - because you’ve stated what you expect and can compare it with what actually happened.
That last point is crucial. Decision design isn’t just about making better calls today. It’s about building a learning loop that sharpens your model of reality over time.
Speed as competitive weapon
Even with clear strategy and sharp measurement, most organisations move too slowly.
The bottleneck is decision speed, not execution speed. Teams can build faster than leaders can decide what to build.
This matters more now than ever. In software, deployment cycles that took months now take hours. Infrastructure spins up in minutes. The constraint has shifted from “can we build it?” to “should we build it, and how will we know?”
Organisations that move at clock-speed instead of calendar-speed win. Not because they make perfect decisions, but because they make good-enough decisions quickly, learn from them, and adjust.
The way to accelerate is to externalise decision logic. Most decisions live in people’s heads. When a question arises, someone thinks through the factors, weighs them, and makes a call. This works until that person is unavailable, or the decision needs to be made a hundred times, or new people join who don’t carry the same mental model.
Codifying the logic changes this. You make the reasoning explicit - what factors matter, how they trade off, what threshold triggers action. Others can see it, question it, apply it consistently.
Take product launch decisions. In most companies, launch readiness is a judgement call. Someone senior reviews the state of the work, asks a few questions, and says yes or no. This creates bottlenecks. Launches wait for that person’s availability. Different launches get different standards. New team members don’t know what “ready” means.
A codified rubric changes this. You list the factors that matter - core functionality complete, edge cases handled, monitoring in place, rollback tested, documentation live. For each, you set a clear threshold - not “mostly done” but “all critical paths work in staging, confirmed by QA.”
Now the decision becomes routine. Teams can self-assess. Launch speed increases. Standards stay consistent. The senior person can focus on exceptions - the 10% of launches where the rubric doesn’t quite fit - instead of personally reviewing everything.
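A minimal sketch of what such a rubric might look like in code - criterion names follow the prose, and the pass/fail statuses are invented for illustration. Readiness becomes a mechanical self-assessment, with only genuine exceptions escalated.

```python
LAUNCH_RUBRIC = [
    "core functionality complete",
    "edge cases handled",
    "monitoring in place",
    "rollback tested",
    "documentation live",
]

def assess_launch(status: dict) -> tuple:
    """Return (ready, blockers): every criterion must be explicitly met."""
    blockers = [c for c in LAUNCH_RUBRIC if not status.get(c, False)]
    return (not blockers, blockers)

ready, blockers = assess_launch({
    "core functionality complete": True,
    "edge cases handled": True,
    "monitoring in place": True,
    "rollback tested": False,  # below threshold: fix it or escalate
    "documentation live": True,
})
print("ready to launch" if ready else "blocked on: " + ", ".join(blockers))
```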
This isn’t about removing judgement. It’s about applying judgement once, at the system level, so execution can happen at pace.
The same logic applies everywhere. Pricing approvals. Security reviews. Hiring decisions. Contract negotiations. Any decision that repeats is a candidate for codification.
The test is simple: could someone else make this decision if they had access to your reasoning? If not, you’re creating a bottleneck.
Time - not cost, not quality - is the ultimate competitive weapon. Build speed is table stakes now. The scarce resource is decision speed.
Leaders who externalise their logic create organisations that move at clock-speed, not calendar-speed.
Statistical discipline
Fast decisions are only useful if they’re good decisions. That requires discipline about which signals actually matter.
The pattern repeats across every business. A metric wobbles. Someone panics. Initiatives launch. Three weeks later the metric stabilises - it was noise, not signal. But the cost has been paid. Energy that should compound got spent reacting to variance.
Churn jumps from 4% to 6%. Two managers see the same spike.
The first slams on the brakes. She freezes hiring, commissions a costly redesign, calls an emergency review. A month later churn slides back to normal - the spike was a client merger, a one-off blip. The fixes cost more than the problem.
The second pauses. She checks whether the data source is reliable, whether this is a pattern or a one-off, whether timing fits seasonal behaviour. With no firm trend, she updates her belief only a notch, runs a couple of retention plays, and stays on course. Six weeks later churn stabilises - and so does her plan.
Same data, different disciplines. One treated every wobble as signal. The other weighted the evidence whilst waiting for more.
Good leaders don’t ignore wobbles. They score them proportionally before deciding how much to update their beliefs.
The Bayesian mindset helps. Start with yesterday’s belief. When new evidence arrives, update in calibrated steps, not lurches. A single data point moves you a notch. Multiple consistent signals justify a bigger shift.
It’s like a thermostat. Small adjustments based on temperature, not slamming the dial because the room felt cold for five minutes.
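For those who like the mechanics, here is a toy version of that thermostat, with invented numbers: churn modelled as a Beta-distributed belief, where one noisy month shifts the estimate a notch rather than resetting it.

```python
# Belief about monthly churn as a Beta distribution:
# alpha counts churn events, beta counts retentions (assumed history).
alpha, beta = 4.0, 96.0
print(f"prior belief:  {alpha / (alpha + beta):.1%} churn")   # 4.0%

# One month arrives with a spike: 6 of 100 clients churned.
churned, retained = 6, 94
alpha += churned
beta += retained
print(f"after update:  {alpha / (alpha + beta):.1%} churn")   # 5.0%, not 6%
```

A single month of data moves the belief from 4% to 5%, not all the way to the observed 6%. Several consistent months would - and should - move it much further.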
Most teams lack the discipline because there’s no routine for it. Urgency wins by default.
The fix is structural. After weekly metric reviews, block ten minutes for evidence-scoring before any decisions. Keep a belief log tracking key assumptions and how confidence shifts. Say out loud what you believed last week and how much you’re updating.
“I was seven out of ten confident we’d hit target. Now I’m six” creates a culture of proportionate updates instead of dramatic pivots.
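A belief log needs no special tooling. As a sketch - field names and entries invented for illustration - it can be as simple as an append-only list recording the assumption, the confidence score, and what moved it.

```python
from datetime import date

belief_log = []  # append-only: never rewrite history, only add updates

def log_belief(assumption: str, confidence: int, evidence: str) -> None:
    """Record an assumption, a 0-10 confidence score, and what moved it."""
    belief_log.append({
        "date": date.today().isoformat(),
        "assumption": assumption,
        "confidence": confidence,  # update in steps, not lurches
        "evidence": evidence,
    })

log_belief("we will hit the Q3 target", 7, "pipeline on plan")
log_belief("we will hit the Q3 target", 6, "one large deal slipped a quarter")
```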
Reward accurate weighting, not heroic course corrections. The person who correctly identified noise and held steady did better work than the person who launched three initiatives in response to variance.
Metrics will always wobble. The test isn’t spotting the wobble - it’s weighting it properly. Pause, score, nudge, move. That’s how you stop an organisation from twitching and help it compound instead.
Conditions for execution
Good decisions still fail if they don’t travel from boardroom to frontline.
You wrestle through a choice, align the team, leave the room with a decision. Months later, not much has changed. The call itself wasn’t wrong. The effort wasn’t lacking. Somewhere between intention and execution, momentum drained away.
This is often badged as an “execution problem”. But scratch the surface and what you find is a trust problem.
Not trust in the interpersonal sense - whether people like each other. Organisations need two distinct elements of trust working together.
The first is honesty in the room. Patrick Lencioni describes this as the willingness to be vulnerable with one another. To raise awkward truths, admit mistakes, challenge assumptions without fear.
Without it, meetings are tidy but shallow. People nod along whilst doubts and risks stay unspoken. This kind of trust gives leaders better raw material for decisions. It brings in the market signal, the operational constraint, the customer feedback. It makes the call more robust.
The second is confidence in the system. Elliott Jaques argued that genuine organisational trust comes less from personality and more from structure. The confidence that the system itself is reliable and fair.
That means accountability and authority are aligned. Managers genuinely add value to their team’s work. Decisions are made in the right place without endless rework. When those conditions are present, people don’t need to hedge or second-guess. They can commit with pace and conviction, trusting that the scaffolding will hold.
This is the trust that ensures choices made at the top actually travel. People can throw their energy into execution knowing the system won’t betray their effort.
The two parts feed one another.
Honesty relies on execution. If people trust that decisions will be carried through, they can afford to be candid. The conversation can be bolder, more nuanced, less defensive.
Execution relies on honesty. If people trust that decisions are grounded in reality, they commit more fully. Without that, they hold back, build in hidden contingencies, or wait to see if management will reverse course.
Take a simple example: pricing a new product. Interpersonal trust ensures the debate is real. Sales bring their view of the market, finance test the margin model, product weigh the competitive angle. The discussion is frank, not polite. That produces a stronger decision.
But the decision only matters if it’s acted upon. That’s where organisational trust comes in. Marketing need to be trained to communicate the value, sales equipped with new collateral, finance systems updated to invoice correctly. Without that scaffolding, the decision dissolves into frustration.
The same applies to strategy shifts. Interpersonal trust sharpens the choice. Organisational trust embeds it in the way the business runs. Both are required.
Candour without structure gives lively debate but little delivery. Structure without candour gives efficient silence - decisions carried out quickly but often wrong.
Strong organisations create trust through candour and structure. That is what turns decisions into action, and keeps momentum alive between boardroom and frontline.
Preserving knowhow
Even when decisions travel well, organisations lose critical capability when people leave.
Someone resigns and suddenly the smooth-running process judders. Reports don’t reconcile, customers chase, small fires appear. The manager insists the handover was done properly. The process map looks fine. But something’s missing.
The truth emerges slowly. What the business thought the person did, and what they actually did, were different things.
These gaps reveal fragility. Beneath the tidy diagrams sit invisible seams of judgement, tacit skills, and accumulated fixes that make the system actually work.
The gap is between knowledge and knowhow. Knowledge is explicit - it sits in documents, job descriptions, training decks. Knowhow is embodied. It’s the subtle adjustment, the timing that makes the flow work, the fix you only learn by having seen the problem before.
César Hidalgo makes this point in Why Information Grows: information alone doesn’t create prosperity. What matters is knowhow - embodied in networks of people who can combine and recombine skills. It’s not what’s written down, but what’s enacted.
Businesses confuse the two constantly. They think the process map is the process. It isn’t.
The shift is to stop treating knowhow as disposable and start treating it as capital.
That doesn’t mean turning everything into a manual. It means four moves.
First, spot the fault lines. If this person walked tomorrow, what would break? Those hidden dependencies are where the asset lives. Naming them openly is the first act of protection.
Second, make it visible. Not everything that sits in someone’s head is worth keeping. Alongside the sharp fixes and hard-won judgement live inefficient shortcuts, outdated rituals, and occasional madness. Seeing it clearly lets you separate what works from what should be stripped away.
Third, build light scaffolding. Once you know what matters, give it enough structure to survive a handover. Checklists, worked examples, simple principles - devices that preserve judgement without flattening it.
Fourth, make it iterative. The real value comes when knowhow isn’t just preserved but sharpened. Each cycle of doing improves the system. Today’s fix becomes tomorrow’s baseline. A scientific loop applied to operations. That’s how resilience compounds.
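To make the third and fourth moves concrete, here is a hedged sketch - step, fields, and contents all invented - of light scaffolding that captures judgement alongside the checklist, and folds each cycle’s fix into the next baseline.

```python
runbook_step = {
    "step": "reconcile the month-end report",
    "version": 3,
    "checklist": [
        "export before 9am, whilst the ledger is locked",
        "spot-check the three largest accounts first",
    ],
    # the judgement a process map never shows:
    "judgement": "a difference under 0.1% is usually the FX feed",
}

def improve(entry: dict, new_item: str) -> dict:
    """Fold this cycle's fix into the baseline: today's fix, tomorrow's step."""
    return {**entry,
            "version": entry["version"] + 1,
            "checklist": entry["checklist"] + [new_item]}

runbook_step = improve(runbook_step, "confirm FX rates refreshed before reconciling")
```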
The most sophisticated business systems show where this leads. Toyota’s Production System began with small efforts to make tacit factory knowhow visible and improvable. Over decades, those loops of learning compounded into one of the deepest moats in modern industry.
Danaher’s Business System followed a similar arc. Each acquisition wasn’t just new financials - it was a new contributor to a learning network. Captured practices spread across the group, but they were never static. Replication accelerated improvement. Improvement deepened replication. That’s why DBS is so hard to copy. It’s not a manual, it’s a living network of knowhow.
This connects directly to decision architecture. Decisions are only as good as the knowhow that informs them. When that knowhow walks out the door, decision quality degrades. The organisation reverts to first principles every time someone new arrives.
Preserving knowhow means preserving decision capability. Not just “what we decided” but “how we think about these choices, what patterns we’ve learned, what trade-offs we’re willing to make.”
Make it visible, give it light structure, make it improvable. That’s how organisations retain the ability to make good decisions even as people change.
Learning faster than the world changes
Decision architecture isn’t a project. It’s a system.
Strategy as hypothesis. Measurement as decision design. Speed through codified logic. Statistical discipline to separate signal from noise. Trust structures that let decisions travel. Knowhow preservation that maintains capability.
These elements work together. Strategy without measurement is guesswork. Measurement without decision design is theatre. Speed without statistical discipline is whiplash. Trust without knowhow preservation is fragile. Each piece reinforces the others.
The organisations with the deepest competitive moats didn’t build them through brilliant strategy documents. They built them by creating systems that learn.
Every decision is a test. Strategy becomes hypothesis, outcomes become evidence, and the next cycle starts with a sharper model of reality.
This is what applied scientific thinking looks like at scale. Not certainty, but faster learning. Not perfect decisions, but better decisions over time. Not static playbooks, but living systems that adapt whilst retaining what works.
The long-term edge lies in learning faster than the world around you changes.
Most organisations lack this discipline. They make decisions based on intuition, politics, or whoever spoke last. They react to every wobble in the metrics. They lose critical capability when key people leave. They wonder why execution feels so hard.
The answer isn’t to work harder. It’s to build architecture.
Start with one decision that repeats. Make the strategy explicit - what are you actually betting on? Design the measurement - what information would change your action? Codify the logic - what factors matter and how do they trade off? Weight the evidence - is this signal or noise? Build the trust structures - can this decision travel from boardroom to frontline? Preserve the knowhow - what would break if the decision-maker left?
Do this once and you’ve made that decision faster and better. Do it systematically and you’ve built an organisation that learns.
That’s decision architecture. Not a framework to implement, but a discipline to practice.
The world will keep changing. Markets will shift, assumptions will crack, metrics will wobble. The question is whether your organisation learns from it, or just reacts to it.
Build the architecture. The rest follows.
This essay synthesises ideas from:
- Applied Scientific Thinking
- From Data to Information
- Hidden Bottleneck
- When Numbers Twitch
- Two Halves Of Trust
- When Someone Leaves
See also: Reading Guide for the complete collection of Field Notes.