Don’t Rush. Don’t Delay.
On decision, execution, and the cost of getting locked in
Most leaders I speak to feel caught between two pressure points, a tension heightened by the current phase of AI adoption, where expectations are racing ahead of decision-making structures.
Move too fast and you’re reckless.
Move too slowly and you’re irrelevant.
AI discourse collapses this into a crude binary: ship or stall. Act or hesitate. Build or fall behind. It’s an understandable framing, but it becomes corrosive once early enthusiasm meets operational reality, when teams have already moved fast, results are uneven, and leaders are left holding decisions that are hard to reverse and even harder to justify after the fact.
This pressure is felt most acutely by those who actually carry the downside: the people accountable for risk, capital, and operational consequence. When things go wrong, it is not the narrative that absorbs the impact, but the balance sheet, the operating model, and the organisation itself.
If that tension feels familiar, it is not a personal failure. It is a predictable response to scrutiny, uncertainty, and the demand to look decisive.
It’s also why motion feels so seductive.
Execution relieves pressure. Delivery creates the sense that something is being handled. Pilots, demos, and visible progress provide psychological cover when scrutiny is high and uncertainty is uncomfortable. Movement signals seriousness. Stillness feels exposed.
That dynamic doesn’t emerge by accident.
In many AI programmes, the strongest voices for speed are also those least exposed to downstream operating debt. Delivery is rewarded. Momentum is visible. The cost of rework, governance debt, and operational drag shows up later, often on someone else’s ledger.
There will almost always be people in the room pushing for speed. Often they are external. Often they are articulate. Often they are persuasive. And just as often, they are insulated from the hard yards that follow.
When the conversation turns from demos to workflows, from pilots to operating reality, from “what’s possible” to “who owns this when it breaks,” the energy drops. That work is slower, messier, and less visible. But it’s where the money actually is.
The real risk in AI is not speed.
The real risk is irreversibility without ownership.
Some decisions don’t just move a roadmap forward. They harden structures. They lock in workflows, incentives, accountability, and power. Once embedded, they are politically and operationally expensive to unwind. What looks like momentum at the surface becomes gravity underneath.
This is where many organisations get trapped.
They rush execution while postponing decision.
They launch pilots without resolving intent.
They explore use cases without naming value.
They experiment without assigning ownership for consequences.
Motion substitutes for commitment, and activity creates the illusion of safety.
What looks like progress is often deferred accountability.
This is the distinction that gets missed, and it is absent from far too many AI conversations that claim to be “strategic.”
There is a difference between delaying execution and delaying decision.
Delaying execution can be wise.
Delaying decision rarely is.
Most organisations do the opposite. They accelerate build activity while deferring the harder questions. Who owns the outcome? What trade-offs are acceptable? What must be true for this to be justified at all? Those questions don’t disappear. They just get answered later by architecture, incentives, and default behaviour in production systems: procurement gates, reporting structures, compliance obligations, data contracts, operating metrics that become fixed.
Urgency doesn’t have to come from fear.
It can come from responsibility.
The question isn’t “will we miss out?”
It’s “what becomes locked in if we get this wrong?”
That reframing changes everything. It moves urgency upstream, to the moment where change is still cheap and decisions are still reversible. It replaces FOMO with stewardship.
This is the paradox most teams miss.
A small amount of upstream work is not what slows you down.
It is what allows you to move fast without regret.
But only if that upstream work produces real commitments - named owners, explicit trade-offs, and clear preconditions - not more workshops, decks, and optimistic ambiguity.
Upstream is not about caution for its own sake. It’s about refusing false linearity. It’s about forcing clarity before systems calcify. It’s about making leaders decide what must be true before anything is built, rather than discovering too late that those conditions were never in place.
Calm here is not passivity. It’s signal.
The same pattern shows up in how we work as individuals. Frenetic motion feels productive, even virtuous. But leverage rarely comes from speed alone. It comes from owned decisions, made deliberately, at the right moment.
AI doesn’t punish hesitation.
It punishes unexamined commitment.
The most responsible move is often neither “go” nor “wait”.
It’s to decide, properly, while you still can.

