Your Daily Eko

You listened. You learned. You forgot. But don’t worry, Eko remembered for you. Here are your daily top insights to keep you sharp.

🧠 Insights You Won’t Forget

Today's insights are inspired by How Complex Systems Fail by Richard Cook.

  1. Failure in complex systems is systemic, not isolated

    Catastrophic failure occurs not due to a single error but from a confluence of small, often latent failures. These hidden issues evolve with time and system changes, making failure an emergent event, not an isolated mishap.

  2. Root cause analysis is a flawed framework

    Post-accident investigations often seek a “root cause,” but this oversimplifies the true multifactorial nature of systemic failures. These efforts usually reflect social desires for accountability, not technical reality.

  3. Human operators balance production and protection

    Workers are both producers and safeguards. They are constantly managing trade-offs between efficiency and safety, and their real-time decisions often prevent unseen failures from surfacing.

  4. Change is a double-edged sword

    Introducing new technology or processes often eliminates known, frequent failures but inadvertently creates new, rare, and potentially catastrophic ones. These are harder to foresee and can go unrecognized until too late.

  5. Safety is an emergent, dynamic system property

    Safety isn’t a feature of components or individuals but arises from the continuous, adaptive actions of people operating within a complex system. It evolves and must be cultivated actively over time.

  6. Hindsight bias clouds judgment post-accident

    Knowledge of an outcome makes it seem obvious in retrospect. This distorts evaluations of past decisions and leads us to overestimate what should have been apparent at the time.

  7. Successful operations depend on practiced adaptability

    Practitioners constantly create safety through small, often intuitive adaptations, including reallocating resources, rerouting tasks, or improvising based on real-time conditions.

  8. Failure-free operations require experience with failure

    Operators need to encounter and learn from failures to recognize system boundaries. This calibration improves their ability to keep operations within safe margins.

Recall from last week
  1. “Vital Few” over “Trivial Many”

    Greenoaks doesn’t pursue breadth. The firm operates with a clear filter for what’s worth its attention, dramatically reducing noise and increasing signal. This is supported by its ability to deploy conviction capital (up to $500M) quickly without committees, as seen in the Carvana, TripActions, and Rippling deals.

  2. Avoid “Momentum Thinking” in Public Markets

    Greenoaks is comfortable buying into high volatility and broken narratives if it understands the business deeply. Carvana was a prime example: the firm invested heavily as the stock fell from $300 to $5, betting on a misunderstood operational turnaround.

💡 Eko Worth Remembering

“All practitioner actions are gambles… successful outcomes are also the result of gambles.”

Richard Cook

⚡ Active Recall – Test Yourself 

Question: Why is “root cause analysis” considered an inadequate approach for understanding failures in complex systems, and what alternative understanding does Cook suggest?

(Answer at the bottom)

🛤️ Off the Record

Happy Monday & Memorial Day, hope you are enjoying the long weekend!

Reading How Complex Systems Fail had me thinking about a different kind of system: career trajectories. Specifically, the idea of failing forward. We all know those people who seem to swing for the fences, miss, and somehow still level up. The product doesn’t ship, the venture falls short, the deal doesn’t land, but they walk away with a better title, a bigger paycheck, or a warm intro to the next opportunity. It almost feels like failure is part of the plan.

And in a way, it is.

Like complex systems, successful careers often carry latent fragilities: missed targets, near-burnouts, internal chaos. But when designed thoughtfully, they also have built-in redundancies: a strong network, social capital, narrative skill, or simply the luck of compounding experience. The smartest operators I know don’t just hedge against failure; they harness it. They set up systems where even a public flop leaves them with asymmetric upside.

Here’s the kicker: if you know your “failure” still moves you forward, you’re free to take bigger bets. That’s a powerful frame: not reckless risk-taking, but informed, resilient exploration.

There’s also something deeper here: people who fail early and often tend to build a more intuitive understanding of how the system breaks. They learn not just what works, but why things fall apart. Over time, that makes them not just experienced, but wise.

This brings up an interesting tension for anyone building teams: do you hire the polished expert or the sharp learner who’s failed fast and thoughtfully? Most companies default to the former. But a future-proof workforce needs both. We don’t get mastery without making space for messiness.

So yeah, maybe the game isn’t to avoid failure. It’s to design your life so that when it happens, it compounds.

Eko’s Top Pods

Reply with an episode suggestion. If added, you’ll get a shoutout from Eko!

Answer:

Because failures arise from multiple interacting factors, no single “root cause” can be isolated. Cook suggests embracing a systems view that acknowledges the interplay of latent conditions, adaptations, and operational dynamics rather than simplistic cause-effect reasoning.

Enjoyed these insights? Forward this newsletter to a friend. Let’s grow smarter, together.
