The Knowledge We Never Had to Explain
AI, judgment, and the cost of decisions we let remain informal
Most organisations run on decisions that work only because certain people just know how things are done. Not rules or policies, but judgment. The accumulated sense of timing, risk, and appropriateness that comes from being around long enough.
It’s the raised eyebrow in a meeting, the pause before saying yes, the instinctive ‘this one’s different’ that never makes it into a document.
Not because it couldn’t be written down, but because no one ever had to make it defensible.
This knowledge doesn’t announce itself. It lives in experience, relationships, and shared history. It works precisely because it is informal, human, and situational. It’s the handwritten margin note in the organisation’s mind, the part everyone relies on and no one cites.
And for a long time, that was enough.
Until we asked AI to participate. Not as a tool we consult, but as something embedded in workflows, in agentic systems that don’t just suggest but act. AI forces the implicit to become defensible, because it can execute what you give it at scale.
This isn’t an argument against AI. It’s an argument for making decisions defensible before we delegate them.
For years, this wasn’t a technical problem. It was a social one.
Judgment moved through organisations the way navigation once did at sea: guided by charts, yes, but ultimately dependent on seasoned hands reading conditions no map could fully capture. When something went wrong, a person absorbed the ambiguity. When something nearly went wrong, someone adjusted course the next time.
Risk was carried by people, not systems.
Nothing forced this knowledge to be explicit. Nothing required it to be defended. Nothing demanded that anyone say, out loud, why this exception was acceptable and that one wasn’t.
AI changes that. Not because it is smarter, but because it removes the buffer.
It does not hesitate in the way humans do. It does not glance around the room. It does not feel the weight of precedent or reputation. When we give it a decision, it repeats it. Faithfully. Indifferently. At scale.
Most organisational systems are very good at recording outcomes. They log what happened, when it happened, and who approved it.
They are far worse at recording why an exception was allowed, who truly owned the risk, or what conditions would have made the same decision unsafe next time.
That gap stayed invisible while humans carried the load.
It becomes intolerable when we delegate action to systems that repeat us perfectly.
This is uncomfortable, but it’s also clarifying. It’s the first time many organisations have been forced to say what they mean.
What begins to fracture here isn’t technology. It’s the myth organisations told themselves about how their decisions worked.
What we often call tacit knowledge sounds noble, as if it were wisdom too subtle to be captured.
Sometimes it is.
But a lot of the time, it’s something else entirely.
What we call tacit knowledge is often just responsibility that was never claimed.
Decisions weren’t written down because no one was required to stand behind them. Exceptions survived because accountability was shared, vague, or conveniently absent. Judgment worked because it could remain personal, contextual, and reversible.
AI doesn’t remove judgment. It exposes whether it was ever owned.
Suddenly, someone has to say what the organisation actually believes, not once, but continually. Someone has to define when an exception is legitimate and when it becomes dangerous. Someone has to own the consequences when a decision is executed a thousand times without pause or reconsideration.
This is why so many AI efforts feel brittle, even when the technology works.
AI doesn’t fail because it lacks context. It fails because we never agreed on the reasons we were willing to give.
What breaks isn’t the model. What breaks is the fiction that our decisions were ever neutral, obvious, or universally understood.
The instinctive response is to try to capture everything. To document judgment. To encode nuance. To turn lived experience into rules.
That instinct is understandable. It’s also a mistake.
Not all judgment should be automated. Some discretion cannot be defended at scale without changing its nature. Some decisions only work because a human is present, accountable, and able to reverse course.
The harder move is refusal.
In mature organisations, refusal is not hesitation; it’s an assertion of control over what the organisation is willing to be held accountable for.
Refusing to delegate decisions that cannot yet be named. Refusing to delegate where judgment exists only because no one was forced to articulate it. Refusing speed until accountability is explicit and owned, because this is not about caution for its own sake, but about legitimacy.
AI has a way of surfacing truths organisations were able to avoid while humans compensated for weak structure. It doesn’t create the problem.
It reveals what we were relying on people not to ask.
The pressure many organisations feel right now isn’t really about keeping up, or falling behind, or missing the next wave.
It’s about accountability finally becoming unavoidable.
And that discomfort isn’t a sign that something has gone wrong.
It’s the feeling of decisions, at last, becoming real.
And that’s not a reason to rush. It’s a reason to slow down long enough to name what you believe, where you allow exceptions, and who carries the consequences when AI repeats you perfectly.
If the question "which of our decisions are we actually willing to stand behind once they're executed at scale?" resonates, I've published an AI decision-forcing canvas for deciding whether delegation is legitimate before you build.


This reframe of tacit knowledge as unclaimed responsibility is sharp. In my experience, a lot of exceptions that got labelled as "context-dependent judgment" were really just decisions no one wanted traceable back to them. AI doesn't break that pattern; it just makes the avoidance visible.