Not Every Error Is Foolish

I recently visited a museum and came across an object that stopped me in my tracks. It was not a painting or a sculpture, but a simple banner bearing a Latin phrase:

Non omnis error stultitia est dicendus.

Not every error should be called foolishness.

What struck me was not just the phrase, but where it once lived. The banner hung in the fact-checking department of The New Yorker. An institution known for both imagination and rigor felt the need to make a clear distinction between different kinds of errors.

That distinction feels increasingly rare in modern organizations.

Most companies treat error as a single category. Something went wrong, therefore someone failed. In doing so, they collapse two fundamentally different activities into one: precision and progress.

Innovation rarely survives that collapse.

When Precision Arrives Too Early

Many organizations say they want innovation, yet they design work as if everything should be correct from the start. Ideas are expected to be fully formed. Experiments are judged by operational standards. Early initiatives are evaluated as if they were meant to scale, rather than learn.

The result is predictable. Exploration narrows. Risk moves underground. What remains is incremental improvement presented as innovation.

The banner from The New Yorker suggests something more disciplined. It does not celebrate error. It classifies it. Some errors signal carelessness. Others signal exploration. Treating them the same is not rigor. It is imprecision.

Sequencing Imagination and Rigor

One often overlooked detail is that the banner did not hang in the editorial room. It hung in fact-checking.

Editorial work existed to explore ideas, take intellectual risks, and test new perspectives. Fact-checking existed to apply discipline through accuracy and verification. The two were distinct, yet tightly coupled.

Imagination came first. Precision followed.

They were not blended. They were sequenced.

Many organizations reverse that order. They demand precision before exploration has had a chance to do its work. When that happens, innovation does not fail loudly. It quietly disappears.

Innovation Is a Portfolio, Not a Bet

Innovation portfolio frameworks reinforce this logic. Not all innovation aims to accomplish the same thing.

Some efforts are incremental and should be precise and predictable. Others are adjacent, stretching existing capabilities. Still others are exploratory or disruptive, operating under high uncertainty with asymmetric upside.

The mistake many leaders make is applying the same evaluation criteria to all of them.

Exploratory work is judged as if it were incremental. Disruptive ideas are asked to justify themselves too early. Learning is mistaken for inefficiency.

Innovation is not a single initiative. It is a portfolio. And portfolios only make sense when outcomes are assessed at the system level, not project by project.

This is not just theory. In The Science of Change, Richard Boyatzis cites a study by Spencer (1988) examining U.S. Army organizational effectiveness programs. Over a two-year period, roughly 80 percent of initiatives failed to meet their objectives. Yet the remaining 20 percent generated benefits that repaid the cost of the entire program multiple times over.

The insight was not that failure is good. It was that value is asymmetrically distributed, and success must be evaluated at the portfolio level.
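As a back-of-the-envelope illustration of that asymmetry, here is a minimal Python sketch with entirely hypothetical costs and benefits (these are illustrative numbers, not figures from the Spencer study): four out of five projects miss their objectives, yet the portfolio as a whole more than repays its cost.

```python
# Illustrative portfolio-level evaluation with hypothetical numbers.
# Each project: (cost, realized_benefit). Most "fail" (benefit < cost),
# but one outsized success repays the whole program.
projects = [
    (1.0, 0.2),   # failed to meet objectives
    (1.0, 0.0),   # failed
    (1.0, 0.5),   # failed
    (1.0, 0.3),   # failed
    (1.0, 12.0),  # the rare asymmetric win
]

total_cost = sum(cost for cost, _ in projects)
total_benefit = sum(benefit for _, benefit in projects)

# Project-by-project view: 4 of 5 look like failures.
failures = sum(1 for cost, benefit in projects if benefit < cost)
print(f"Failure rate: {failures / len(projects):.0%}")         # 80%

# Portfolio view: the program more than pays for itself.
print(f"Portfolio return: {total_benefit / total_cost:.1f}x")  # 2.6x
```

The point of the sketch is the unit of analysis: judged project by project, the program looks like an 80 percent failure; judged as a portfolio, it is a clear win.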

What This Means for CEOs

For CEOs, this is not an abstract innovation debate. It is a design responsibility.

Most innovation failures do not originate in teams or ideas. They originate in how accountability is designed. When every initiative is expected to justify itself through near-term results, leaders unintentionally signal that exploration is unsafe.

This creates a paradox. Leaders ask for innovation, but reward certainty. They call for experimentation, but punish ambiguity.

The real question for CEOs is not whether they personally tolerate failure. It is whether their systems can distinguish between errors that reflect poor execution and errors that reflect learning.

That distinction shows up in:

  • how funding decisions are made
  • how progress is reviewed
  • which questions dominate governance forums
  • and when leaders intervene versus step back

If exploratory work must constantly explain itself in operational terms, it will either conform or disappear.

Designing for Both Progress and Precision

The Latin phrase on that banner is not an argument for tolerance. It is an argument for discernment.

Leaders who want innovation must design work that allows different kinds of effort to coexist without being judged by the same standards. Precision matters. But it matters later.

The real question is not “Why did this fail?”

It is “What kind of work was this meant to do?”

When organizations learn to ask that question, innovation stops being an act of faith and becomes a matter of design. The central idea is simple: innovation fails less because of ideas, and more because of how learning and precision are sequenced and judged.

Bonus: Questions for Leaders Designing Innovation

  • What types of innovation are we actually funding today?
  • Where are we applying operational metrics to exploratory work?
  • Which errors lead to learning, and which indicate breakdowns in execution?
  • Do our governance forums create clarity, or premature convergence?

These answers usually reveal more than any innovation strategy deck.

Bonus: Further Reading on Innovation and Learning

If you’d like to explore the research behind these reflections, two resources are particularly helpful:

  • From IDEO: Don’t Throw Away Your Innovation Budget
    IDEO’s work on innovation portfolios distinguishes between incremental, adjacent, and breakthrough innovation, emphasizing that each requires different evaluation criteria, time horizons, and tolerance for uncertainty.

  • From Richard Boyatzis: The Science of Change
    The book discusses the Spencer (1988) study of U.S. Army organizational effectiveness programs referenced above, illustrating why the value of change efforts has to be judged at the portfolio level rather than initiative by initiative.

Thiago Licias de Oliveira – Founder of Unmaze