Duri Chitayat
The Anatomy of Strategy

Feb 2025

This post is about "strategy" as a process, an insight, a policy, and a set of actions.

Over my career, I’ve never started a project in the perfect position. If that kind of situation even exists, I don't think it would be very interesting. There are always obstacles and uncertainty. The situation asks us a question: what do you want to do?

Unfortunately, I rarely have a good solution upfront. Too many obstacles. Too many details.

Early in my career, not knowing what to do bothered me. I felt like an imposter. That feeling would prompt me to jump to a solution. A few times when I did this, the quick, decisive action actually received praise. It was only later that I realized I'd overlooked significantly easier ways to achieve better outcomes.

I don't think I'm alone in this situation. Indeed, I believe many really good people fall into this trap, perhaps especially those with great experience or instincts. We've trained ourselves to have good instincts, to recognize patterns and apply heuristics. No strategy. Or at least, no contextual strategy. Just good practice and experience.

There's nothing wrong with this in general. It's important to note that strategy is not a necessary ingredient for success. Hard work, talent, and a bit of luck go a long way. But strategy is what allows us to succeed when good practice is not enough: when the world is changing, when we need to sustain success, and when we need to scale. These are the situations I want to focus on here.

The Art of Not Knowing

Every good strategy starts with some uncommon insight.

Richard Rumelt explains:

The first step of making strategy real is figuring out the big ‘aha’ to gain sustainable competitive advantage—in other words, a significant, meaningful insight about how to win.

  • Richard P. Rumelt, Good Strategy Bad Strategy: The Difference and Why It Matters

And as Peter Thiel puts it:

The best entrepreneurs know this: every great business is built around a secret that’s hidden from the outside. A great company is a conspiracy to change the world; when you share your secret, the recipient becomes a fellow conspirator.

  • Peter Thiel, Zero to One: Notes on Startups, or How to Build the Future

How do we achieve this insight? Well, there are many ideas about this, and of course taste and experience play a part. There are a few Design Thinking and advanced sense-making techniques that I'm personally attracted to; I'll link to these below. But please understand: my point here is that whatever we do, it should be a deliberate and rigorous process combining:

  1. empathy development
  2. qualitative research
  3. quantitative analysis

I think this forces us to confront reality from a variety of angles and makes it harder to fool ourselves or anchor on faulty assumptions.

Secondly, an express objective of this process needs to be pressing on deeply held assumptions. If we don't, we will become sheep-dipped in common knowledge, when what we really want is uncommon insight.

Techniques

Here are some techniques I like:

  • Shadowing (Ethnographic Research): This is a process popular in HCD and Design Thinking circles in which you quietly follow a target user through real tasks in their natural context, without prompting, judging, or trying to solve anything—just noticing behaviors, work-arounds, artifacts, and environment. Avoid interrupting, minimize the Hawthorne effect (blend in, be quiet), and separate what happened (facts) from what you think it means (later synthesis).
  • Current State Value Stream Mapping: This is a lean technique to visualize the flow of value through a system. You diagram the steps (work + waits) from customer request to delivered value and annotate them with real data (times, volumes, error rates). It is often accompanied by cross-team workshops, using the visualization to encourage shared understanding and surface "kaizen" moments. These books are a great intro: Learning to See and Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation.
  • Contextual Inquiry: This is a user research method where you observe and interview users in their natural environment while they perform real tasks. The goal is to understand their workflows, pain points, and needs in context. It combines observation with open-ended questioning to gather rich qualitative data.
  • Service Safari: This is a method where you experience a service as a customer would, to identify pain points and opportunities for improvement. You document your journey, noting interactions, emotions, and any obstacles encountered. Applied systematically (also known as "dogfooding"), this really should be part of your organization's DNA.
  • Jobs-To-Be-Done “Switch” Interviews: a structured interview used to reconstruct the real timeline of how and why a person switched from one service to another. It maps the "forces of progress": the push of the current situation and the pull of the new solution, along with the inertia of existing habits and anxieties about change.
  • Wardley Mapping: an advanced mapping technique to visualize the landscape of a business or system, showing components, their evolution, and how they interact. It helps identify strategic opportunities and threats by understanding the context and dynamics of the environment. This is a good intro: Simon Wardley: an introduction to Wardley Mapping.
  • Service Design Blueprinting: a detailed front-to-back mapping of a service experience, showing customer actions, frontstage interactions, backstage processes, and support systems. It helps identify pain points and opportunities for improvement across the journey.

Diagnosis

Okay. So now we've done our research, gathered data, and surfaced insights. Now what?

Now it's time to narrow the problem space.

We need to develop a crisp explanation of the challenge that simplifies messy reality by spotlighting the few factors that actually drive outcomes. A sound diagnosis frames “what’s going on,” names the crux (the hardest, most consequential part to solve), and sets up everything that follows: guiding policy and coherent actions.

A good diagnosis:

  • Cuts through noise: selects what’s critical (drivers, constraints, leverage points) so people stop chasing everything.
  • Explains causality: why the challenge exists now and which forces keep it in place.
  • Points the way (without being the way): it doesn’t prescribe actions, but it limits the field of sensible moves.
  • Focuses on the crux: the bottleneck where concentrated effort can unlock a cascade of wins.

Here's an example:

Diagnosis: Despite heavy top-funnel spend, our challenge is stagnant adoption due to high switching costs and weak differentiation; at the crux of this challenge is the painful 10-day data migration that kills 70% of trials.

That diagnosis doesn’t name a solution, but it immediately narrows smart options.

How

This is where first principles thinking comes in. We need to break down the problem to its most basic elements and reassemble it from the ground up.

Think of First Principles as the bridge between your raw discovery work and Rumelt’s diagnosis. You’re using what you observed (empathy + qual + quant) not to stack opinions, but to strip the situation to its non-negotiables and rebuild a causal explanation of “what’s really going on.”

After separating facts from opinions, re-express the problem in elements that don't depend on fashion or preference. For example:

Individual cognition & motivation

  • Attention budget (scarcity): Limited working memory → shortcuts, missed steps, error spikes under load.
  • Loss aversion / risk asymmetry: Avoiding losses outweighs equal gains → stickiness, slow switching.
  • Status quo inertia / habit loops: Existing routines resist change unless friction drops or payoff is immediate.
  • Trust formation/erosion: Consistency + transparency → permission to act; breaches create long tail drag. See The Five Dysfunctions of a Team.
  • Autonomy–Mastery–Purpose: Motivation rises with control, growth, meaning; micromanage these and output falls. See Drive by Daniel Pink.
  • Cognitive load & context switching: Multitasking taxes throughput; WIP limits raise quality.
  • Satisficing vs. optimizing: Most users pick “good enough” under uncertainty/time pressure.
  • Temporal discounting: Near-term pain beats long-term gain → shorten time-to-value.
  • Social proof & norms: Others’ behavior frames “acceptable” choices.
  • Recognition as fuel: Visible progress and credit compound engagement.

Team & social dynamics

  • Psychological safety: Speak-up culture predicts error detection and learning speed. See Coaching Agile Teams.
  • Role clarity / interfaces: Ambiguity multiplies handoff failures; crisp RACI/API reduces rework.
  • Transactive memory: “Who knows what” maps enable speed; silos kill it.
  • Coordination cost: Communication paths grow nonlinearly with team size (Brooks’ Law). See The Mythical Man-Month by Frederick P. Brooks Jr.
  • Power distance: High distance suppresses weak signals from edges.
  • Informal networks (shadow org): Real influence ≠ org chart; follow the connectors. See Leading Change by John P. Kotter.
  • Cadence & rituals: Regular check-ins create predictability and surface issues early.
  • Conflict hygiene: Task conflict good; relationship conflict toxic—needs norms. See Radical Candor.
  • Onboarding half-life: Newcomer time-to-independence predicts team capacity.
  • Bus factor / key-person risk: Concentrated knowledge is a systemic fragility.

Structure & interfaces (org design primitives)

  • Conway’s Law: System architecture mirrors communication structure; change one to change the other.
  • Modularity & decoupling: Clear boundaries reduce blast radius and coordination load.
  • Span & layers: Too wide → neglect; too deep → latency.
  • Decision rights (who decides, where): Ambiguity stalls action; subsidiarity speeds learning.
  • Change cost (transaction friction): The cost to propose/ship change shapes innovation rate.
  • Slack (buffers): Zero slack = zero improvement capacity; queues explode. See The Goal.
  • Standard work & checklists: Encode the “current best way” to lower variance.
  • Escalation paths: Known, fast routes limit festering incidents.
  • Platform vs. product split: Shared services vs. customer-facing ownership changes incentives.
  • Context design (domain models): Language + boundaries define what teams see and optimize. See Team Topologies and Domain-Driven Design.

Information & decision flow

  • Common knowledge: Shared facts everyone knows that everyone knows (basis for alignment).
  • Information asymmetry: Gatekeepers distort priorities and timelines.
  • Feedback latency: Slow feedback = slow learning; shorten loops.
  • Observability: Instrumentation of work and outcomes enables truth over opinion.
  • Source of truth: Fragmented data breeds reconciliation overhead.
  • Narrative coherence: A simple story beats dashboards when coordinating many actors.

Incentives

  • Goodhart’s Law: When a metric becomes a target, it stops being a good metric; watch for gaming.
  • Local optimization incentives: Subteam goals can fight the company goal.
  • Fairness & reciprocity: Perceived inequity tanks discretionary effort.
  • Principal–agent problem: Agents optimize their risk/effort, not always the principal’s outcome.
  • Career capital & status: People bias toward work that grows reputation/options.
  • Risk appetite & governance: Organizational tolerance defines which bets are even considered.
  • Switching costs: Time, data, political capital—these govern adoption more than “features.”

Environmental Constraints

  • Regulatory/Compliance constraints: Hard bounds that shape feasible moves.
  • Market norms & expectations: Table stakes vs. differentiators change by category.
  • Channel power dynamics: Gatekeepers (app stores, partners) tax and throttle access.
  • Temporal rhythms (seasonality, quarter ends): Cycles drive behavior windows.
  • Resource ceilings: Headcount, skills, capital—capacity is destiny without redesign.
  • Cultural context: National/industry culture shifts defaults (e.g., directness, consensus). See The Culture Map by Erin Meyer.

How to use these in a first-principles diagnosis

  • Map evidence → primitives: Tag your observations (notes, logs, incidents) to a small set of primitives that best explain behavior.
  • Link primitives causally: “Because [primitive A + primitive B] and we observe [facts], [outcome] persists.”
  • Find the crux: Sensitivity test which primitive (or interface between two) drives the biggest change if moved.
  • State the diagnosis: “Given [named primitives] and [evidence], our core challenge is [X]; the crux is [Y].”
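
To make the mapping concrete, here is a minimal sketch in Python of the evidence → primitives step. The observations, tags, and tallying heuristic are all invented for illustration; this is a sketch of the bookkeeping, not a prescribed tool.

    from collections import Counter

    # Hypothetical field notes, each tagged with the primitives that best
    # explain them. Both the notes and the tags are invented for illustration.
    observations = [
        ("Trials stall at the data-import step", ["switching_costs", "temporal_discounting"]),
        ("Champions ask about rollback in most calls", ["loss_aversion"]),
        ("Support tickets spike on day 3 of onboarding", ["cognitive_load", "switching_costs"]),
        ("Sales promises bespoke imports to close deals", ["local_optimization"]),
    ]

    # Tally how often each primitive is needed to explain the evidence. A small
    # set that covers most observations is a candidate causal core, and the
    # most-cited primitive is a candidate crux to stress-test next.
    tally = Counter(tag for _, tags in observations for tag in tags)
    for primitive, count in tally.most_common():
        print(f"{primitive}: explains {count} observation(s)")

The point isn't the tooling; it's that tagging forces every observation to earn a causal explanation instead of resting on opinion.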

A few useful templates

Assumption ledger row

Claim: “Mid-market trials fail because onboarding is long.” → Type: Assumption → Evidence needed: Time-to-value by cohort, drop-off step → Test: instrument setup, analyze session replays.

Causal statement

“Because [invariant/constraint] and we observe [fact], [outcome] occurs via [mechanism].”

Diagnosis sentence

“Due to [few key causes]—especially [crux]—our core challenge is [problem stated as effect].”

Summing up Diagnosis

First-principles thinking is the engine that powers the move from data to crux. We strip away opinion and inherited heuristics, reduce the problem to primitives, and rebuild a minimal causal model. We then try to break that model with disconfirming cases and quick Fermi-style checks; what survives identifies the real levers. Sensitivity tests (“Which variable, if nudged, changes results the most?”) surface the crux.
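
As a sketch of what such a check can look like, here is a toy funnel model in Python. The model shape, the decay assumption, and every number are invented for illustration; the pattern is simply "nudge one variable at a time and see what moves."

    # Toy funnel model for a Fermi-style sensitivity check. All numbers and the
    # decay assumption are invented; substitute your own evidence.
    def trials_converted(leads, trial_rate, onboarding_days):
        # Assume trial completion decays with onboarding length, floored at 10%.
        completion = max(0.10, 1.0 - 0.07 * onboarding_days)
        return leads * trial_rate * completion

    baseline = {"leads": 1000, "trial_rate": 0.20, "onboarding_days": 10}
    base = trials_converted(**baseline)

    # Nudge each variable ~10% in the favorable direction, one at a time; the
    # variable that moves the outcome most points at the crux.
    for var in baseline:
        nudged = dict(baseline)
        nudged[var] *= 0.9 if var == "onboarding_days" else 1.1
        print(f"{var:>16}: {trials_converted(**nudged) - base:+.1f} vs. baseline {base:.1f}")

In this toy setup, shortening onboarding dominates the other nudges, which is exactly the kind of asymmetry that marks a crux.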

The goal is to get to a simple explanation of the challenge that pares a messy situation down to the few forces that actually drive outcomes. We get there by assembling concrete evidence, proposing rival explanations, and stress-testing each against what’s verifiably true.

We look for structural features—bottlenecks, choke points, asymmetries, constraints, and shifts in the environment—and ask, “If this explanation is right, what else should we see?”

Within the explanation is the crux: the most stubborn, consequential obstacle whose resolution unlocks outsized progress. A good diagnosis therefore rules out many tempting directions, replaces vague aspirations with causal logic, and makes the next step—the guiding policy—almost write itself.

Guiding Policy

The guiding policy gets into the "So what do we do about it?" question. It’s the approach we’ll take to tackle the diagnosed crux—clear directional rules that concentrate effort and create advantage. It isn’t a feature list or OKRs; it’s a set of choiceful constraints and priorities that narrow the universe of moves so smart actions become obvious and dumb ones become unlikely. A good guiding policy answers: Where will we focus? What will we trade off? Which mechanisms will we rely on to turn the crux in our favor?

The guiding policy is the design response built on those primitives—explicit bets about leverage points (what small changes move big outcomes) and trade-offs (what we’ll stop doing). In other words: Given the physics we discovered, here’s how we’ll use them.

Guiding Policy: A choiceful approach that concentrates resources on the crux using a specific wedge, focuses on the segments where that wedge bites, and makes trade-offs explicit—encoded as a handful of simple rules teams can execute without permission.

Here's a useful template

  1. Aim at the crux: “Because the crux is [X], we will bias resources toward [Y].”

  2. Wedge (advantage mechanism): “We’ll win by [mechanism] (e.g., reduce time-to-value, exploit distribution, raise switching costs ethically, create network effects).”

  3. Focus (where/when): “We will focus on [segments/geos/use cases/time window] where our mechanism bites hardest.”

  4. Trade-offs (no-go’s): “We will not do [A, B] even if attractive in isolation.”

  5. Simple rules (decision heuristics): “If [condition], then [default decision].” 3–7 rules that every team can apply without escalation.

Here are a few common patterns

  • Time-to-Value Wedge: “Concentrate design/engineering on reducing TTFV below a champion’s risk tolerance; de-emphasize net-new features until TTFV < 24h.”
  • Credibility Wedge: “Borrow trust from respected partners/certifications; deprioritize direct enterprise until reference flywheel spins.”
  • Distribution Wedge: “Prioritize channels with built-in demand (ecosystem app stores, SIs); say no to custom one-offs.”
  • Cost-to-Serve Wedge: “Standardize onboarding and pricing to expand gross margin, even if it slows bespoke deals.”

For example:

Example Diagnosis

High switching costs + long time-to-first-value → champions’ loss aversion kills trials; crux = 10-day onboarding.

Example Guiding Policy

  • Aim: Collapse onboarding to 24 hours for target segments.
  • Wedge: Ship “proof-without-pain” via sandbox data + auto-connectors + read-only OAuth; design for credible reversibility (easy rollback lowers loss aversion).
  • Focus: Mid-market teams with modern stacks (Snowflake/BigQuery) in two verticals where connectors exist.
  • Trade-offs: Freeze net-new feature requests that don’t cut TTFV; no bespoke imports; enterprise security questionnaires deferred until post-activation.

Simple rules

  • If a change doesn’t reduce TTFV by more than 20%, it waits.
  • If a sales ask adds custom work, require a repeatable version within two sprints—or decline.
  • If the prospect can’t self-serve a demo, we haven’t solved the crux—treat as a blocker.
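
Rules this mechanical can even be encoded so the defaults apply themselves. Here is a minimal sketch in Python; the field names are hypothetical, and the thresholds come straight from the rules above.

    # The three simple rules above, expressed as default decisions.
    # Field names are hypothetical; the point is that applying a rule
    # requires no escalation.
    def default_decision(item: dict) -> str:
        if item.get("kind") == "change" and item.get("ttfv_reduction_pct", 0) <= 20:
            return "wait"      # doesn't cut time-to-first-value enough
        if (item.get("kind") == "sales_ask" and item.get("adds_custom_work")
                and not item.get("repeatable_within_two_sprints")):
            return "decline"   # bespoke work with no repeatable version
        if item.get("kind") == "prospect" and not item.get("can_self_serve_demo"):
            return "blocker"   # the crux isn't solved yet
        return "proceed"

    print(default_decision({"kind": "change", "ttfv_reduction_pct": 12}))  # wait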

Plan of Action

The plan of action is where strategy turns tactical. It draws a clean line from the diagnosis and guiding policy to a small, coherent program of moves aimed directly at the crux. Each move should be an intentional bet: why this, why now, how it exploits the wedge, and how we’ll know it’s working.

Coherence

A good plan of action has two kinds of coherence:

  1. Reinforcing: the actions should reinforce and stabilize one another. Change requires attention and energy, and these are often our scarcest resources. By picking actions that reinforce one another, we concentrate our attention and energy, which increases our likelihood of success.
  2. Trade-offs: there are many things we may need to do, but the ones we highlight should reflect the key decisions we are making. This relieves backpressure and demonstrates leadership commitment, enabling similar tradeoffs to be made more easily.

Focus

Just as important as what you do is what you stop doing, so effort concentrates on the wedge you chose. This is the hardest part—discipline over temptation—but it prevents dilution. Set explicit no-go zones, WIP limits, and capacity budgets (e.g., “10% of capacity on bespoke asks until TTFV < 24h”).

Tight focus also means sequencing: stage the work so early moves unlock later ones (proof → scale → optimize), rather than launching everything at once.

What the output should look like

End with 5–10 moves—no more—that each: (a) clearly tie to the guiding policy and the diagnosed crux, (b) have a named owner, timeline, and success metric, and (c) list the trade-offs that fund and protect them. Together, they should read like a single push on the same lever, not a grab-bag of initiatives.

Here’s a useful template

Move: [Name]

  • Why: [How it attacks the crux via the guiding policy]
  • Owner: [Who]
  • Timeline: [When]
  • Success Metric: [How we’ll know it’s working]
  • Trade-offs: [What we’re not doing to fund/protect this]

Example

Move #1: Launch read-only OAuth + sandbox data.

  • Why: Lowers loss aversion; makes value visible without risk.
  • Owners: Jane Doe, Engineering Leader & John Smith, Product Manager
  • Timeline: Q3 2025
  • Success Metric: TTFV p50 → <24h.
  • Trade-off: Freeze net-new features unrelated to onboarding.
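
A success metric like “TTFV p50 < 24h” only works if it is instrumented. Here is a minimal sketch of the measurement, assuming you log a signup timestamp and a first-value timestamp per account; the events below are invented for illustration.

    from datetime import datetime
    from statistics import median

    # Hypothetical event log: (account, signup_time, first_value_time).
    events = [
        ("acme",    datetime(2025, 7, 1, 9, 0),  datetime(2025, 7, 9, 17, 0)),
        ("globex",  datetime(2025, 7, 2, 10, 0), datetime(2025, 7, 2, 20, 0)),
        ("initech", datetime(2025, 7, 3, 8, 0),  datetime(2025, 7, 12, 8, 0)),
    ]

    # Time-to-first-value in hours per account; the p50 is simply the median.
    ttfv_hours = [(first - signup).total_seconds() / 3600 for _, signup, first in events]
    print(f"TTFV p50 = {median(ttfv_hours):.1f}h (target: < 24h)")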

Final Thoughts

Strategy is a process, an insight, a direction, and a set of deliberate choices. There are situations that don't require strategy, but when we act on strategy, we have more confidence that we are taking careful aim at the efforts that will make a sustainable and scalable impact.