Before You Build: A Decision Surface for AI Initiatives
How to decide whether an AI initiative should proceed - before momentum takes over
Most AI initiatives fail long before code is written.
Not because the models are weak.
Not because the tools aren’t capable.
But because the organisation never made the decision it thought it had made:
- What job is actually changing.
- Who owns the outcome.
- What the system is allowed to do without asking.
- What failure looks like, and who pays for it.
- How we would stop it if it goes wrong.
These questions are usually skipped, blurred, or postponed.
Momentum replaces judgement.
Pilots stand in for decisions.
And by the time reality intrudes, too much has already been built to stop.
That pattern is now familiar enough to name.
The problem isn’t intelligence. It’s premature action.
Over the last year, I’ve watched organisations rush into “agentic” systems, copilots, and automation initiatives with remarkable speed, often with capable teams and serious intent, but without a decision anyone could later point to.
What they lack is not ambition or technical skill.
It’s a shared understanding of what they are actually committing to.
AI changes workflows.
It changes accountability.
It changes risk distribution.
Treating those changes as implementation details rather than decisions is how trust gets eroded and ROI disappears.
So instead of writing another essay or framework, I built something deliberately unexciting.
A decision surface, not a strategy
I’ve published a Decision-Forcing AI Business Case Canvas.
It exists for one purpose only:
To decide whether an AI initiative should proceed - before vendors, pilots, or delivery plans enter the picture.
Not to design the solution.
Not to justify a choice already made.
Not to accelerate execution.
To decide.
The canvas asks questions most organisations delay until it’s too late:
- What job is actually changing hands, end-to-end?
- Where does this intervention stop?
- Who is the single accountable human owner?
- What is the system allowed to do without asking, and what is forbidden?
- How could this fail, and how widely?
- What evidence would we need after an incident?
- Who can shut it down, and how fast?
If these can’t be answered honestly, the instruction is simple:
Pause.
Discomfort here isn’t a failure.
It’s information.
Why make this public?
Because this is upstream work.
Most artefacts in this space help teams plan execution.
This one exists to decide whether execution should happen at all.
I’ve seen too many teams discover the cost of speed too late.
Publishing this is a way of forcing better conversations earlier, without selling anything or capturing anyone.
It’s free to use internally.
It’s not a methodology.
It doesn’t replace judgement or governance.
It just makes the absence of those things visible.
Two documents, two roles
There are actually two artefacts:
- The Canvas: for orientation and decision.
- The Working Surface: for recording commitments once a decision is made.
One provokes clarity.
The other records accountability.
Neither tells you what to build.
Both make it harder to pretend you’ve decided when you haven’t.
What this is - and isn’t
This isn’t about slowing down.
It’s about not discovering the cost of speed too late.
It won’t suit every organisation.
It will frustrate teams looking for reassurance or momentum.
But for leaders who are accountable for outcomes, risk, and trust, it creates a different starting point, one where judgement comes before tooling.
That’s the only place where durable value seems to emerge.
The canvas
You can find the Decision-Forcing AI Business Case Canvas here:
Use it. Share it. Argue over it.
If it forces you to stop, that’s not a problem.
That’s the work.

