The AI Absorption Problem
Most organisations are not short of AI activity. They are short of absorption capacity.
Microsoft’s latest Work Trend Index is framed as a report about agents, human agency, and opportunity. That is fair. The report is optimistic, and not without reason. Workers are already using AI to analyse, draft, decide, coordinate, and extend what they can do.
But the harder finding sits underneath the optimism: people are moving faster than the systems around them.
The report says this plainly. In many cases, people are ready, but the systems around them are not. It frames the constraint as the gap between what employees can now do and what organisations are built to support. It also finds that organisational factors such as culture, manager support, and talent practices are associated with more than twice the AI impact reported for individual mindset and behaviour alone.
That is not a marginal finding. It is the centre of the problem.
Most organisations are no longer facing a simple adoption question. The issue is not whether AI is being used. It is not even whether AI has been placed into workflows. The harder question is whether the organisation can absorb what AI changes once capability starts moving through real work.
That is the AI absorption problem.
Adoption asks whether people are using AI. Deployment asks whether AI has entered the workflow. Absorption asks whether the organisation can carry the consequences.
That distinction matters because an organisation can have licences, pilots, internal champions, prompt libraries, agent experiments, usage dashboards, and leadership updates, while still failing to turn local gains into institutional advantage. Individuals can become more capable without the organisation becoming more capable with them.
That is the strange paradox now appearing inside many companies: AI can expand individual agency while exposing institutional immaturity.
A worker can become faster and more resourceful with AI. But if the organisation around them has not changed its incentives, decision rights, quality standards, review paths, data access, supervision model, and proof expectations, that gain remains trapped at the edge. It may help the person. It may help the team. It may even create local productivity. But it does not yet become institutional capability.
This is where a lot of AI programmes start to feel busy but strangely uncompounding.
The activity is real. The benefit may be real. The local improvement may be worth having. But the organisation cannot always explain what changed in the work, where the review burden moved, what authority widened, what new risk appeared, what evidence proves the gain, or what can now be standardised.
This is the sociotechnical problem hiding inside a lot of AI optimism. Speeding up one part of the system does not automatically improve the system. It can simply push more output into the next bottleneck: review, approval, interpretation, data access, customer response, compliance, management attention, or downstream execution. If drafting falls from four hours to one but review still takes two hours per document, throughput is now set by review, and the queue in front of it grows. The organisation feels acceleration, but the constraint has only moved.
The constraint always moves.
You can see this in the meeting where every function can point to progress and nobody can explain the whole operating change. HR has training numbers. IT has licences. Product has experiments. Finance has a benefits assumption. Risk has concerns. Managers have workarounds. The slides say momentum. The execs know something has not yet become governable.
Stanford’s recent enterprise AI report points in the same direction from the other side. It studied successful deployments rather than general workforce readiness, but the lesson is strikingly similar. The technology was not usually the hardest part. The hard work sat in process redesign, change management, data access, executive sponsorship, iteration, and the invisible labour required to make AI survive contact with production.
Microsoft names the gap between employee capability and organisational readiness. Stanford shows what successful organisations had to do to close it. The same technology, the same broad use cases, and wildly different outcomes. The difference was not model power. It was the organisation’s ability to redesign work, clear blockers, handle resistance, document processes, build feedback loops, and keep learning after earlier failure.
Anthropic has been saying the same thing in a different register. In its financial services briefing, the point was explicit: the technology can already do a great deal, but “the actual organisation’s ability to digest and absorb it” is where the gap tends to be. That matters because this is not a sceptical outsider’s complaint. It is coming from one of the frontier labs trying to commercialise the capability.
This is also why Anthropic is leaning into something closer to a forward-deployed model. The lab cannot simply ship frontier capability and wait for large enterprises to transform. It needs people close to the customer, inside the workflow, helping translate capability into operating change. That is the old Palantir lesson in a new AI register: the product is not just software. It is software plus embedded judgement, workflow proximity, and institutional rewiring.
Even the people building the models are running into the same constraint: capability does not commercialise itself. It has to be absorbed into work.
In other words, the differentiator was always absorption capacity.
This is also why the current agent conversation can be misleading. Agents make the problem sound like a software capability issue. Can the system act? Can it complete a workflow? Can it use tools? Can it coordinate multiple steps? Can it behave like a teammate rather than an assistant?
Those questions matter, but on their own they are nowhere near sufficient. The real question is whether the organisation is ready for what delegated action changes.
Once AI systems begin acting inside workflows, the organisation needs more than enthusiasm. It needs evaluation infrastructure, identity, permissions, monitoring, auditability, lifecycle management, policy enforcement, and, in practical terms, credible ways to intervene when the system should narrow, pause, or stop.
That is another irony in the Microsoft report. The more agentic the system becomes, the less credible it is to treat AI as a lightweight productivity layer. Agents promise more autonomy. But autonomy increases the need for explicit control.
That is not a contradiction. It is the condition of safe delegation. The moment a system can act on behalf of the organisation, the organisation must be able to say what it is allowed to do, what it is forbidden to do, what must be reviewed, what gets logged, who can revoke authority, and how quickly intervention works when reality does not match the plan.
The agent may be new. The responsibility is not.
Most organisations already have delegation systems. They are made of roles, permissions, approvals, escalation paths, audit trails, managerial judgement, and informal human repair work. Agents do not remove that system. They stress-test it. They reveal where delegation was already vague, where ownership was social rather than explicit, and where “someone will check it” was the control plan.
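To make that concrete, here is a minimal sketch of what explicit delegation can look like when it is written down rather than left social. It is illustrative only: AgentAuthority, can_act, and revoke are hypothetical names, not any vendor's API or any framework from the reports discussed here. The structure carries the argument: a default-deny posture, explicit allowed and forbidden actions, mandatory review categories, an audit trail, and a revocation path that works immediately.

```python
# A hypothetical sketch of explicit agent delegation. Every name here is
# illustrative; real systems express this through identity, policy, and
# audit infrastructure rather than a single class.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAuthority:
    """What an agent may do, what it may not, and how to narrow it."""
    agent_id: str
    allowed: set[str]         # actions the agent is explicitly allowed to take
    forbidden: set[str]       # actions it must never take
    needs_review: set[str]    # actions that pause for human review
    audit_log: list[str] = field(default_factory=list)
    revoked: bool = False

    def can_act(self, action: str) -> str:
        """Decide on an action and log the decision, so authority stays auditable."""
        if self.revoked:
            decision = "denied: authority revoked"
        elif action in self.forbidden:
            decision = "denied: forbidden"
        elif action in self.needs_review:
            decision = "paused: human review required"
        elif action in self.allowed:
            decision = "allowed"
        else:
            # Default-deny: anything not explicitly delegated is out of scope.
            decision = "denied: not delegated"
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} {self.agent_id} {action}: {decision}"
        )
        return decision

    def revoke(self) -> None:
        """The stop path: one call withdraws all authority, immediately."""
        self.revoked = True


# A narrowly scoped invoicing agent.
agent = AgentAuthority(
    agent_id="invoice-drafter",
    allowed={"draft_invoice"},
    forbidden={"send_payment"},
    needs_review={"send_invoice"},
)

print(agent.can_act("draft_invoice"))   # allowed
print(agent.can_act("send_invoice"))    # paused: human review required
print(agent.can_act("delete_records"))  # denied: not delegated
agent.revoke()
print(agent.can_act("draft_invoice"))   # denied: authority revoked
```

A real deployment would distribute this across identity providers, policy engines, and monitoring rather than one object, but the questions the sketch encodes are the ones above: what is delegated, what is reviewed, what is logged, and who can narrow authority when conditions change.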
This is where absorption becomes more useful than adoption as an executive frame. Adoption can be measured by usage. Absorption has to be judged by changed capacity.
Can the organisation perform a workflow better under real conditions, explain what changed, price the supervision burden, preserve the learning, and decide what should scale or stop? Those are not software questions. They are operating questions.
What work has actually changed? Which bottleneck moved? Which handoff became cleaner? Which approval became unnecessary? Which review burden appeared elsewhere? Which decisions are now being shaped by AI? Who owns the outcome when the system is wrong? What evidence would make us scale, narrow, or stop? What has the organisation learned that can now travel?
That last question is often the missing one.
An organisation does not become more capable simply because people use AI. It becomes more capable when learning travels. When one team’s discovery becomes shared infrastructure. When a pilot produces evidence, not folklore. When a workflow improvement becomes a reusable pattern. When governance is not a late-stage veto, but part of how the organisation earns speed.
Microsoft explicitly says the organisations pulling ahead are focused on “AI absorption rather than just AI adoption,” and calls the result a Learning System. That phrase matters. It shifts the emphasis away from tool access and toward organisational metabolism: how quickly the organisation notices what is working, captures it, codifies it, diffuses it, and improves the next deployment.
That is absorption in practice: not everyone becoming an AI expert, but the organisation becoming better at turning scattered use into shared operating knowledge.
Otherwise, the business ends up with private productivity and public confusion. Individuals get faster. Teams improvise. Local experiments proliferate. But the organisation cannot see the whole pattern clearly enough to decide what deserves investment, what requires redesign, what should be standardised, and what should be stopped.
That state can look impressive from the outside. It may produce plenty of internal noise. But it is fragile because the knowledge stays local, the controls stay inconsistent, and the economics remain difficult to defend.
This is not a call for centralised bureaucracy. That would be the wrong lesson. Heavy governance can kill learning just as easily as weak governance can lose control. The point is not to smother local experimentation. It is to make local learning portable.
There is a difference between experimentation and institutional learning: experimentation produces activity, while institutional learning changes what the organisation can reliably do next.
When absorption works, the organisation does not become slower or more bureaucratic. It becomes easier to change. Local experiments produce reusable patterns. Review work stops multiplying invisibly. Managers spend less time stitching together machine output and more time improving the system. Boards get clearer evidence. CFOs get cleaner cost lines. Teams get permission to use AI more ambitiously because intervention, learning, and accountability are no longer improvised after the fact.
The organisation can widen authority with confidence because it knows how to narrow it again.
The organisations that get this right will not be the ones that simply push harder on usage. They will build the conditions under which AI can be carried. They will define the work before automating it. They will make decision rights explicit before widening authority. They will instrument workflows before claiming productivity. They will treat supervision as real work. They will design escalation and stop paths before they need them. They will ask where human judgement becomes more important as execution becomes cheaper.
As AI takes on more execution, human judgement does not disappear. It moves.
It moves upstream into intent, taste, framing, system design, and the choice of what should be delegated at all. It moves into quality control, exception handling, escalation, and the ownership of outcomes. It moves into the uncomfortable work of deciding when a system that appears useful is not yet entitled to scale.
The optimistic version of AI is not that humans do less thinking. It is that organisations stop wasting human judgement on work that never deserved it, while becoming more deliberate about the judgement that remains.
But that only happens if the organisation redesigns around the shift. If it simply drops AI into existing workflows, the result is often more output into the same constraints. More drafts to review. More summaries to interpret. More generated material to reconcile. More decisions that look finished before they have been properly owned.
That is not absorption.
That is acceleration without assimilation.
The practical implication for leaders is simple, though not easy. Stop treating AI maturity as a question of how much activity exists. Start treating it as a question of what the organisation can now carry.
Boards, CFOs, and executive teams should sharpen the conversation. Do not only ask how many people are using AI. Ask what work has changed. Do not only ask how many pilots are running. Ask which ones have earned the right to continue. Do not only ask whether agents are being tested. Ask what authority they have been given, what evidence they produce, and who can narrow that authority when conditions change. Do not only ask whether productivity has improved. Ask where the cost, review, risk, and responsibility moved.
These are absorption questions. They are less glamorous than adoption metrics, but they are much closer to value.
The organisations that win will not necessarily be the ones with the highest AI usage, the largest number of experiments, or the most aggressive internal mandates. They will be the ones that build absorption capacity faster than their competitors. They will turn local gains into operating knowledge. They will convert experiments into reusable patterns. They will know when to widen authority and when to stop. They will be able to move faster because the slow work of redesign, proof, and governance has already been done.
That may sound less exciting than the usual agent story, but it is where the money is.
Most organisations are not short of AI activity. They are short of absorption capacity.
That is the divide now opening. Not between organisations that use AI and those that do not, but between organisations that can turn scattered capability into operating knowledge and those that remain trapped in private productivity and public confusion.
AI does not become valuable when it enters the organisation.
It becomes valuable when the organisation can carry what it changes.
◆
I advise organisations on what AI work should scale, narrow, redesign, or stop before activity hardens into cost.

