The Agentic Operating Model
What has to be true before autonomy can scale
There is a particular kind of disorder spreading through companies right now, and it does not look like failure at first.
It looks like motion.
Licences are opened at speed because everyone is using the tools. Teams start experimenting in parallel. An engineer builds something in an afternoon and it looks incredible. Another wires up a small internal app and, for half an hour, feels like they have changed the company. Somewhere else, someone connects to an API they barely understand, gets a plausible answer back, and mistakes fluency for reliability. A second team builds something similar without knowing the first exists. A senior executive, under pressure to take a view, starts using AI to think about AI because everyone else seems to be moving already and nobody wants to be the person still standing on the platform while the train leaves.
From the outside, this looks like momentum.
From the inside, it often feels like excitement and waste arriving together.
That is the mood inside a lot of organisations now. Not ignorance. Not laziness. Not bad intent. Pressure. FOMO. Fragmented experimentation. A creeping sense that something significant is happening, combined with an equally strong suspicion that much of what is happening is brittle, duplicated, expensive, or aimed at the wrong problem.
That is why I keep writing pieces like this.
Not because I enjoy being the person who drags the conversation back to constraints. I don’t. In some easier universe, I would be the person whose only job was to point at the dazzling bit. But judgement improves the closer you stay to what is actually there: the real workflow, the real costs, the real ownership, the real failure modes, the real limits of the institution you are asking to carry the change. Step too far from that and it becomes very easy to confuse motion with progress and novelty with value.
That confusion sits right at the centre of what people currently call agentic AI.
The phrase itself already misleads. It makes the problem sound cognitive, as if the hard part is getting the system to behave intelligently enough. As if the real drama sits inside the model.
In live organisations, the hard part is rarely intelligence.
It is permission and coordination.
Permission, because someone has to hold the right to act, the right to override, the right to pause, the right to ask the awkward question when everyone else wants the room to stay excited. Coordination, because work in real firms is not a sequence of neat tasks waiting for intelligence to arrive. It is handoffs, timing, interpretation, exception handling, dependencies, side effects, and the endless labour of keeping one part of the machine from flooding the next.
That is why so much of the current AI story feels unreal. The popular image is still one of isolated brilliance at scale: the model writes, reasons, plans, summarises, generates, automates. But enterprises do not run on isolated brilliance. They run on who talks to whom, who waits for whom, who corrects whom, who absorbs the consequences when one team’s local win becomes another team’s downstream mess.
Most firms survive on a great deal of tolerated vagueness. Ownership is supposedly clear. Escalation paths are supposedly known. Decision rights are supposedly understood. Exceptions are supposedly handled. And for long stretches of time, that is enough, because people absorb what the formal system never resolved. They recognise when a number is wrong in a way the spreadsheet cannot explain. They know when a process has worked on paper but failed in spirit. They patch meaning back into the workflow at the boundary between teams. They stop small incidents becoming visible incidents.
That human repair work is not a nice extra.
It is part of the operating model.
Then delegated action enters the picture. Not software that waits for attention, but software that continues after attention has moved elsewhere. A system that keeps carrying intent forward while the person who set it in motion is already in another meeting, already answering another message, already assuming somebody else will catch it if it drifts, or has left the company altogether and taken the context with them.
This is the point at which tolerated vagueness becomes dangerous.
Because the question the organisation could once postpone now becomes immediate: who is allowed to interrupt the machine, and who is responsible for everything it touches while it is still moving?
That is not really a governance question, at least not at first. It is a legitimacy question. It is about whether the institution has authority in the right place, at the right speed, with enough protection around the person exercising it, to act when acting is inconvenient.
This is why the first serious agent incident rarely feels like a lesson in model performance. It feels like a lesson in the organisation itself. The technical problem is often the easiest part of the story. The harder part is the meeting afterwards: the pause in the room, the sideways glances, the dawning realisation that nobody was quite sure who had the standing to narrow the system’s authority quickly enough. Not because nobody cared, but because the firm had never properly decided what interruption was allowed to cost socially.
A tool can live inside fuzzy authority for quite a long time because it waits. It stops when the human stops. The work remains socially held.
Delegated systems change that rhythm. They do not simply accelerate the workflow. They accelerate the consequences of whatever ambiguity was already there.
That is why so many AI programmes feel noisy before they feel useful. They generate activity faster than they generate coherence. They produce local wins that create downstream burdens. They increase output in one place and interpretation load in another. They make cleverness visible before they make responsibility visible. They tempt people into believing that because something can be built, it is therefore ready to belong.
It usually isn’t.
And this is where the phrase agentic operating model starts to matter, though not in the dead way strategy decks often use it.
An operating model is not a diagram.
It is the set of conditions that lets delegated action exist inside an institution without constant confusion over who is responsible, who is watching, and who can intervene. It is what stops the enterprise from mistaking improvisation for architecture. It is what separates a demo from a system the organisation can actually live with.
In the tool era, legitimacy was cheap. Humans were the control surface. Most correction happened by default because people were still holding the work together.
In the agent era, legitimacy becomes expensive. Intervention has to be designed. Supervision has to be real. Authority has to be bounded at runtime, not merely described in a document nobody reads until after the near miss. Coordination can no longer depend on memory, goodwill, or the capable person who always spots the problem just in time. The organisation has to decide, explicitly, where the seam is, where the handoff is, where the burden moves, and who absorbs the cost if the system is wrong.
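To make "bounded at runtime" concrete, here is a minimal sketch in Python. Every name in it is hypothetical (AuthorityEnvelope, guarded_act, the specific limits are illustrative, not a framework); the only point is that the bound lives in the execution path, where narrowing the system is one cheap line, rather than in a document nobody reads.

```python
# A minimal sketch of runtime-bounded authority. All names are hypothetical;
# the point is that the bound is enforced at call time, not in a policy doc.
from dataclasses import dataclass


@dataclass
class AuthorityEnvelope:
    """What this agent may do and spend, checked on every single action."""
    allowed_actions: set[str]
    spend_limit: float
    spent: float = 0.0
    paused: bool = False  # anyone with standing can flip this, cheaply

    def permits(self, action: str, cost: float) -> bool:
        return (
            not self.paused
            and action in self.allowed_actions
            and self.spent + cost <= self.spend_limit
        )


def guarded_act(envelope: AuthorityEnvelope, action: str, cost: float) -> str:
    """Act only inside the envelope; everything else is refused and surfaced."""
    if not envelope.permits(action, cost):
        # Refusal is a routing event, not a crash: a named owner takes over.
        return f"REFUSED: {action} (escalate to owner)"
    envelope.spent += cost
    return f"EXECUTED: {action}"


env = AuthorityEnvelope(allowed_actions={"draft_email", "update_ticket"},
                        spend_limit=50.0)
print(guarded_act(env, "update_ticket", 1.0))  # EXECUTED
env.paused = True                              # interruption, made cheap
print(guarded_act(env, "update_ticket", 1.0))  # REFUSED
```

The design choice worth noticing is that refusal routes to a named owner instead of failing silently. That is what keeps the seam explicit.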
This is the deeper shift. The organisation is no longer just deciding whether to use AI.
It is deciding whether it is actually capable of delegation.
That is a far heavier question than most AI discussions allow themselves to admit, because delegation creates an obligation that survives the moment of enthusiasm. Once the system is live, you are responsible not only for what it can do, but for what it continues doing after human attention has moved on.
That is why I do not think the main problem here is fear.
The problem is scale.
Autonomy does not scale on excitement. It scales on conditions.
And when those conditions exist, something very different becomes possible. Approval gets faster, not because risk disappeared, but because authority has become legible. Trust gets cheaper, not because everybody feels optimistic, but because intervention is no longer a dramatic act of social defiance. The organisation can widen delegated action without turning every expansion into a political event. The board can underwrite progress without pretending uncertainty vanished. Teams can stop improvising permission. Leaders can stop making every decision from scratch.
That is where the real wow is, if we are being honest.
Not in the app someone vibe-coded before lunch.
In the institution becoming capable of allowing more without becoming less coherent.
That is what an agentic operating model really buys. Not magic. Not inevitability. Not the right to stop thinking.
It buys the possibility that progress survives contact with the enterprise.
And that is a much rarer achievement than the market currently admits.
Make that real while stopping is still cheap.
Then widen authority.
This is solvable. But it must be owned.
--
Stuart Winter-Tear
Independent advisor | Author of UNHYPED
Advisory: Executive Calibration


So true. So many years of process-improvement post-mortems: but they said they were following the procedures, the managers signed off the model… AI projects are now providing irrefutable evidence of the messy reality, the value of tacit knowledge, and the trade-off between transparency that exposes uncomfortable truths and the effective onboarding of operational agents with massive ROI.
My takeaway:
Don’t just build the agent. Build the kill switch, the audit log, and the escalation path. That’s what turns a demo into a system the company will actually let run.
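A minimal sketch of that triad, assuming no particular agent framework. KILL_SWITCH, AUDIT_LOG, and notify_owner are hypothetical stand-ins for real infrastructure (a feature flag, an append-only store, a pager):

```python
# Toy illustration of kill switch, audit log, and escalation path.
# Every name here is a hypothetical placeholder, not a real API.
import json
import time

KILL_SWITCH = {"halted": False}  # shared flag; the firm decides who may set it
AUDIT_LOG: list[dict] = []       # append-only record of what the agent did


def notify_owner(action: str, exc: Exception) -> None:
    # Escalation path: failures reach a named human, not silence.
    print(f"ESCALATION: {action} failed ({exc}); paging the accountable owner")


def run_step(action: str, execute) -> None:
    if KILL_SWITCH["halted"]:
        AUDIT_LOG.append({"t": time.time(), "action": action,
                          "outcome": "halted"})
        return
    try:
        result = execute()
        AUDIT_LOG.append({"t": time.time(), "action": action,
                          "outcome": "ok", "result": result})
    except Exception as exc:
        AUDIT_LOG.append({"t": time.time(), "action": action,
                          "outcome": "escalated", "error": str(exc)})
        notify_owner(action, exc)


run_step("update_ticket", lambda: "ticket #123 updated")  # normal path, audited
run_step("send_refund", lambda: 1 / 0)                    # failure, escalated
KILL_SWITCH["halted"] = True                              # the cheap stop
run_step("send_refund", lambda: "never runs")             # halted, still audited
print(json.dumps(AUDIT_LOG, indent=2))
```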