AI Doesn’t Break Processes. It Reveals Them.
The fluorescent lighting problem
AI is a bright light in a messy room. Not because it is wiser than the organisation, and not because it “knows better” than the people doing the work, but because it changes visibility. Ambiguity becomes inconsistency. Tacit norms become contradictions. Inherited work becomes friction.
Because AI is the visible new element, organisations often reach for the simplest explanation: the model is the problem.
That explanation is convenient rather than correct.
What is being blamed on AI is frequently pre-existing incoherence made legible. Conflicting definitions of “done.” Multiple versions of the same metric. Roles that exist in governance charts but not in practice. Approvals remembered only when something fails. Work that is formally owned by one team and operationally sustained by another.
AI does not create these conditions. It surfaces them.
When a workflow is internally consistent, AI can integrate with surprising ease. When a workflow depends on exception handling, personal memory, and informal accommodation, AI functions as a stress test. It produces outputs that are defensible in isolation yet wrong in context.
The resulting errors are framed as technical defects.
The failure is organisational.
Most processes are not singular. There is an official version and an enacted version. The official version lives in policies and slides. The enacted version lives in adaptation: parallel spreadsheets, escalation shortcuts, selective rule-bending, and the judgment of people who understand which constraints matter and which can flex.
Many of these environments are not merely complicated. They are complex. Cause and effect are often only coherent in retrospect. Stability arises from informal constraints and adaptive discretion. What appears inconsistent from outside the system is often the system preserving equilibrium under pressure.
AI systems do not share that lived context. They operate on what is encoded, observable, or statistically inferable. They can detect behavioural traces and approximate regularities. They cannot inhabit the informal agreements that give those regularities meaning. Unless boundary conditions are translated into explicit constraints, the system acts within a thinner model of reality than the organisation actually occupies.
So when instructed to “follow the process,” it follows the legible one. And the legible one is often an abstraction that conceals how work is truly stabilised.
Early deployments therefore reveal a familiar pattern. The system acts reasonably according to defined rules yet misaligns with practice. It routes correctly but to the wrong queue. It escalates at the proper threshold, yet escalation itself is culturally discouraged. It flags anomalies in environments where anomalies are routine and socially understood.
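To make the "explicit constraints" point concrete, here is a minimal sketch in Python of the gap between a legible routing rule and the enacted one. Every name in it, the queues, the threshold, the override, is hypothetical and invented for illustration; the point is only that the tacit norm has to be encoded before the system can act on it.

```python
# Minimal, hypothetical sketch: a documented routing rule versus the same
# rule with one tacit norm encoded as an explicit constraint. All names,
# thresholds, and queues are invented for illustration.

LEGIBLE_ROUTES = {
    "billing_dispute": "finance_queue",
    "contract_review": "legal_queue",
}

# The tacit norm, made explicit: on paper finance owns billing disputes,
# but in practice a regional ops team intercepts high-value ones.
EXPLICIT_CONSTRAINTS = [
    {
        "category": "billing_dispute",
        "applies": lambda ticket: ticket["amount"] > 10_000,
        "override_queue": "regional_ops_queue",
        "rationale": "High-value disputes are handled by ops, not finance.",
    },
]

def route(ticket: dict) -> str:
    """Follow the documented process, then apply encoded boundary conditions."""
    queue = LEGIBLE_ROUTES[ticket["category"]]  # defensible in isolation
    for constraint in EXPLICIT_CONSTRAINTS:
        if constraint["category"] == ticket["category"] and constraint["applies"](ticket):
            queue = constraint["override_queue"]  # correct in context
    return queue

print(route({"category": "billing_dispute", "amount": 25_000}))
# -> regional_ops_queue; without the constraint layer: finance_queue
```

Without the second structure, the system "routes correctly but to the wrong queue": the legible rule is followed and the enacted process is violated.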
The technical explanation feels safer.
The deeper issue is exposure.
AI confronts organisations with the gap between declared order and operational reality. Resistance rarely stems from hostility to the technology itself. It stems from discomfort at what becomes visible.
If the actual workflow is documented, it must be defended. If exceptions are recorded, they must be justified. If shadow work is acknowledged, it must be resourced or removed. If political trade-offs embedded in process design are surfaced, they must be owned.
Ownership is harder than optimisation.
When tacit constraints are surfaced and translated into deliberate boundaries, AI stops being a stress test and becomes an accelerant. Variation becomes designed rather than accidental. Escalation becomes structural rather than political. Delegation becomes accountable rather than hopeful.
Organisations usually possess an implicit awareness of how work truly functions. They understand that informal coordination, discretionary judgment, and recurring exceptions sustain performance. These mechanisms allow a narrative of order to persist.
AI does not preserve narrative comfort. It executes what it can see.
That is why it appears to “break” the process.
What it disrupts is the assumption that the process was coherent.
The evidence is rarely hidden. Parallel trackers maintained as insurance. Divergent definitions of identical metrics. Multiple sources of truth no one is willing to retire. Hand-offs that succeed only because a particular individual intercepts them. Exceptions left undocumented because documentation would create accountability.
These are not AI failures. They are characteristics of the operating model as it exists.
The decision facing leadership is straightforward and difficult. Continue attributing exposure to the model, or confront the inherited structure of work.
The only durable correction is structural. Not better prompts. Not additional guardrails layered onto inconsistency. Structural clarity.
That means identifying where variation is legitimate and where it is unmanaged. Naming informal agreements and deciding whether they deserve formal recognition. Clarifying ownership when reality diverges rather than when reporting does.
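What "naming informal agreements" might look like in practice: a minimal, hypothetical registry sketch, again in Python, in which each known deviation from the documented process is recorded, classified as legitimate or unmanaged, and given an owner. The entries and field names are invented; the structure is the point.

```python
# Hypothetical registry sketch: every known deviation from the documented
# process gets a name, a classification, and an owner. Entries are invented.
from dataclasses import dataclass

@dataclass
class KnownVariation:
    name: str
    description: str
    legitimate: bool   # designed variation vs. unmanaged drift
    formalise: bool    # should it enter the documented process?
    owner: str         # who must defend, resource, or retire it

REGISTRY = [
    KnownVariation(
        name="parallel_tracker",
        description="Team keeps a spreadsheet alongside the system of record.",
        legitimate=False, formalise=False, owner="ops_lead",      # retire it
    ),
    KnownVariation(
        name="discount_discretion",
        description="Account managers may waive fees under a regional cap.",
        legitimate=True, formalise=True, owner="sales_director",  # design it in
    ),
]

# Unmanaged variation that nobody intends to formalise is exactly the
# inconsistency a deployed AI system will eventually expose.
print([v.name for v in REGISTRY if not v.legitimate and not v.formalise])
# -> ['parallel_tracker']
```

Whether such a registry lives in code, a wiki, or a governance document matters less than the act of classification: each entry forces the ownership decision described above.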
This is not anti-AI. It is the condition for using it responsibly.
AI amplifies underlying structure. Where coherence exists, it increases throughput. Where fragmentation exists, it increases instability. Where ownership is defined, it can be bounded. Where responsibility is diffuse, it introduces a new category of failure: technically correct action in the wrong environment.
Treating that as a tooling issue offers psychological relief. Tooling issues can be assigned and iterated. Organisational design issues require decision and consequence.
A bright light in a cluttered room is not an accusation. It is information.
The disorder predates the light.
The only question is whether visibility becomes blame, or redesign.


Interesting perspective. Does this lighting make me (my processes) look fat?
Stuart, your articles "AI Doesn't Break Processes" and "The Automation Ceiling" frame a debate happening right now in Legal AI:
- Stefan Eder: Legal AI adoption is shallow because we haven't built proper infrastructure. Solution: knowledge graphs, compound AI, systematic expertise capture.
- Carrie T.: "AI's problem isn't unpredictability - it's that it's too consistent to ignore the incoherence lawyers were managing through flexible interpretation."
Your "fluorescent lighting" metaphor nails it:
Eder thinks the mess can be structured. Carrie T. worries the "mess" is how legal systems actually function: managed ambiguity that can't be eliminated without breaking something essential.
Your "Automation Ceiling" suggests both are right:
Organizations that ask "what must be true before we scale?" distinguish:
- Legitimate variation (to preserve) from unmanaged variation (to fix)
- What should be formalized from what must stay flexible
I'm about to research this ethnographically: when legal orgs deploy AI, where does structural clarity help - and where does it inadvertently remove necessary interpretive space?
Your three "graduation questions" apply directly:
- Economics: What is the true exception rate, including manual fixes?
- Authority: Is escalation procedural or political?
- Exit: Can people override the AI without risk?
Do you see legal services as having constitutive complexity that resists full formalization - or mainly an organizational design challenge?