The Misfit Language of AI
How Taylorism’s Ghost and AI’s Anthropomorphic Lexicon Keep Us Trapped in the Language of Certainty
We built machines that sound human to make their uncertainty feel safe, and in doing so, made our own thinking mechanical.
“The real problem is not whether machines think but whether men do.” - B.F. Skinner
1. The Age of Certainty
We are a civilisation addicted to certainty.
You can feel it in every boardroom, every budget cycle, every KPI review. The language of business still moves to the rhythm of prediction: targets, forecasts, and deltas. We pretend the future is a spreadsheet with missing cells, waiting to be filled.
The habit runs deep. Its roots go back to Frederick Winslow Taylor, the early-twentieth-century engineer who promised to make work scientific. His “one best way” philosophy transformed labour into data and management into measurement. Observation became surveillance; judgment became process. Efficiency was moralised. Waste became sin.
Taylor’s ideas suited the mechanical world they were born into. Cause and effect were visible, production was linear, and optimisation worked. But the logic metastasised. We built whole institutions on the assumption that control equals competence, that efficiency is not one kind of value but the only kind that matters.
Even now, Taylorism’s ghost shapes how we speak. Phrases like streamlining, best practice, KPIs, and time to value all carry its DNA. The spreadsheet became our scripture; certainty, our creed.
2. When Statistics Met the Stopwatch
Then came GenAI, a probabilistic system arriving in a deterministic world.
To fit inside the grammar of the enterprise, AI had to sound controllable. And so it was translated into the familiar idioms of efficiency: automation, cost reduction, productivity gain. The same Taylorist reflexes reasserted themselves.
But underneath, the machinery had changed. These models are not deterministic engines but statistical mirrors, reflecting distributions rather than reasoning about them. They operate in probability space, not in the domain of fixed outcomes. They generate distributions of likelihood, not chains of causality. Their intelligence is approximate by design.
Executives who grew up on Six Sigma and TQM suddenly faced systems that behave more like weather than machinery: patterns to be navigated, not processes to be engineered. The management reflex is to demand the old kind of predictability from a new kind of uncertainty.
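The difference is easy to see in miniature. Below is a minimal Python sketch (toy logits and a hypothetical three-token vocabulary, not any real model's API) of how a generative model chooses its next token: not a lookup, but a weighted draw from a softmax distribution. Identical inputs produce a distribution of outputs, which is exactly the behaviour the deterministic grammar of the enterprise has no words for.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Draw one token index from a softmax distribution over raw logits.

    The "decision" is a weighted random draw, not a deterministic step:
    the same logits can yield different tokens on different calls.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]        # softmax: weights summing to 1
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy logits for three candidate tokens (hypothetical values).
logits = [2.0, 1.0, 0.1]
draws = [sample_next_token(logits, temperature=1.0, seed=i) for i in range(1000)]
# The most likely token dominates the draws, but it never wins all of them.
```

Raise the temperature and the distribution flattens; lower it toward zero and the draw collapses toward determinism. Management instinct asks for the second; the technology's value often lives in the first.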
3. The Anthropomorphic Exception
To make this strange new behaviour legible, we reached for metaphor. And we reached for the oldest one we know: ourselves.
We began to call these systems agents, reasoners, thinkers, learners. We said they had memory and intent, that they could hallucinate or understand.
In technical terms, none of this is true.
But linguistically, it worked.
Anthropomorphic language restored familiarity. It allowed engineers, marketers, and the public to talk about something probabilistic as if it were psychological. It replaced statistical behaviour with narrative causality.
Every mature science, as it develops, invents a vocabulary to purge human metaphor from its explanations.
– Biology displaced the “vital spirit” of vitalism with metabolism and homeostasis.
– Physics replaced Aristotle’s notion of motion by “will” with the quantitative idea of momentum.
– Computer science replaced “mechanical thought” with the formal concept of computation.
Generative AI reversed the trend. It imported human metaphors back into the machine. It made mathematics sound sentient.
The result is a semantic inversion: we now describe simulation in the language of cognition. We confuse pattern with perception, imitation with intention.
4. Certainty as Comfort
Why does this misfit language persist? Because it soothes us.
Anthropomorphic metaphors turn probabilistic fog into familiar story.
“The model decided.” “The agent learned.” “The system wanted.”
Each phrase hides the true machinery - a calculus of probability and feedback - behind verbs of intention.
In a sense, it’s a social technology. Anthropomorphism gives executives and engineers a shared fiction through which to coordinate. It turns probability into personhood. It allows a CFO to talk to a data scientist in a language both can pretend to understand.
But there is a cost. When we grant systems the appearance of agency, we encourage ourselves to believe they can be managed like people, with rules, incentives, and compliance frameworks. We flatten complexity into governance checklists. We replace epistemic humility with corporate hygiene.
5. The Pixar Problem
A story from another industry captures the pattern perfectly.
When John Lasseter first pitched computer animation at Disney, he was asked whether it would make movies faster or cheaper.
He said no, but it would let them tell stories in ways never told before.
Disney fired him.
He went on to help build Pixar.
Years later, Disney acquired it for $7.4 billion.
That’s the grammar of certainty at work: a system trained to hear “faster” and “cheaper” as the only acceptable answers to a question about technology.
And it’s the same question we’re asking AI today.
Will it make what we already do faster or cheaper?
Or will it let us do things we could never do before?
The first protects the old story.
The second writes a new one.
6. The Economics of Reduction
The linguistic distortion meets its economic twin in the boardroom.
Organisations addicted to certainty translate every new technology into a cost story. Cost can be forecast, proven, and defended. Value creation, by contrast, is diffuse, delayed, and politically fragile.
So the anthropomorphic machine and the cost-reduction enterprise collude.
The machine pretends to think; the organisation pretends to understand.
Both are rewarded for the appearance of control.
This is how potential becomes theatre. The true creative frontier of AI - using probabilistic tools to explore, coordinate, and reveal new value - is subordinated to the moral safety of efficiency.
We destroy value not through incompetence but through semantics.
We describe creativity in the only vocabulary our accounting systems can process: optimisation. When the lexicon of life is efficiency, imagination becomes an accounting error.
7. Toward a New Grammar of Value
To escape this loop, we need a new lexicon, one suited to complexity rather than control.
In this language, uncertainty is not a defect but a medium.
Friction is not waste but information.
Coordination is not overhead but creation.
Architectural metaphors - flows, feedback, constraints, alignment - offer a more truthful grammar. They describe systems that are adaptive without being anthropomorphic, dynamic without pretending to be sentient.
In this grammar, value is no longer the absence of cost; it is the presence of coherence.
A system creates value when its parts can respond to one another without collapsing into uniformity, when diversity produces stability rather than disorder.
As both cybernetics and complexity science remind us - from Norbert Wiener’s feedback loops to Dave Snowden’s sense-making frameworks - coherence in living systems arises not from command but from iterative interaction. The same applies to organisations designing with AI inside complex environments: coherence emerges through continual feedback and adaptation, not through control.
8. After Taylor, After Us
A century ago, Frederick Taylor’s stopwatch made labour measurable.
Today, language models make cognition imitable.
Both seduce us with visibility: the belief that precision equals mastery.
But mastery is the wrong aspiration.
In a world of complex systems, the goal is not control but coherence.
And coherence demands a different kind of intelligence, one that can live with uncertainty without mistaking it for failure.
The companies that create value with AI will be those that abandon the industrial dream of certainty and see management not as prediction but as navigation.
To do that, we must learn to speak differently:
to design languages that can hold ambiguity without collapsing it into cost or character,
to measure progress not in reduction but in relationship,
to recognise that in complexity, clarity is born not from control but from conversation.
Taylorism gave us the dream of a perfectly knowable world.
GenAI returns us to a world that is only knowable in parts.
If we can accept that - if we can build systems, metrics, and cultures that thrive amid the partial - then we may finally start creating value worthy of the name.
We don’t need certainty to move forward.
We need the courage to build inside uncertainty.
—
Stuart x
More on my work, podcast, and writing at unhyped.pro