Agents Inherit Your Operating Model
AI has made people faster. But it hasn’t made companies better.
Atlassian’s new report “The AI Collaboration Index” lays it out plainly: daily AI use has doubled in the past year, knowledge workers say it makes them 33% more productive, and executives rate efficiency as the top benefit. Yet 96% of organisations have not seen any meaningful transformation in efficiency, innovation, or work quality.
The Fortune 500 alone could be losing $98 billion annually by mistaking “more tasks completed” for “better outcomes.”
This is the productivity pitfall. AI has been deployed to accelerate individual throughput, but accelerating throughput in isolation is just spinning one cog faster in a broken machine. Silos persist. Workflows remain fragmented. Goals remain disconnected. Instead of creating flow, AI is simply accelerating friction.
Agents don’t invent coordination
Matthew Skelton, co-author of Team Topologies, put it bluntly:
“Orgs that have worked out how to empower autonomous groups of humans… are already in the best place to use Agentic AI.”
Flip it, and the point sharpens: if your organisation has never truly empowered humans with clarity and agency, you are not ready to unleash Agents.
Agentic AI is a mirror. It does not invent coordination, clarity, or trust. It reflects whatever is already present.
If your teams don’t know who owns what, neither will your Agents.
If your people can’t navigate ambiguity with sound judgement, your Agents will flail.
If your workflows are reactive, overstretched, or misaligned, your Agents will inherit those dysfunctions, only faster.
And this is the irony. Many companies dream of autonomous AI Agents, while denying autonomy to their human teams. They want machines to act with judgement after spending years stripping judgement from people through bureaucracy, over-specification, and fear of failure.
They crave Agents that own outcomes, but won’t give their people the power to say “no.”
The wiring under the board
What enables Agents is exactly what enables effective human teams:
Bounded autonomy - freedom to act, but with clear limits.
Stable interfaces - consistent ways of handing work across boundaries.
Domain-aligned ownership - clear responsibility for outcomes, not just outputs.
This is what Team Topologies codified for human organisations. And it’s precisely what Atlassian’s research confirms for AI adoption: the few companies seeing transformational returns are those that build connected knowledge bases, set clear goals in integrated systems, and treat AI as part of the team rather than a bolt-on.
It’s the wiring under the board. The infrastructure for agency itself.
Without it, Agentic AI can’t create flow; it only runs into the same blockages your people do.
Outcomes over output
The deeper lesson here is about measurement.
Most companies still measure AI success by task completion: hours saved, lines of code generated, emails drafted. But this is throughput, not value. It answers “how much did we do?” not “what changed because of it?”
The Atlassian data is clear: organisations that obsess over personal productivity are 16% less likely to drive innovation than those that focus on coordination.
Outcome-based measures look different:
Is work quality improving in ways customers feel?
Are error rates dropping?
Are we delivering things we couldn’t deliver before?
Are teams learning faster from experiments and turning that into new products, patents, and revenue?
That’s what real ROI looks like. Not faster tasks. Better outcomes.
What winning looks like
None of this is new. The conditions for Agentic AI are the same conditions good organisations have always needed: clarity, trust, autonomy, and coordination. The difference now is that the imperative is more urgent: if you want to see ROI from AI, you can’t keep postponing the work of fixing your foundations.
The signs of those pulling ahead are already visible:
Connected knowledge - integrated, shared knowledge bases, so humans and Agents work from the same context instead of silos.
Clear goals and interfaces - teams know their remit, dependencies are explicit, and Agents can plug into the same scaffolding.
AI as part of the team - not a bolt-on, but given defined roles and responsibilities, revisited as projects evolve.
Outcome-based measurement - value is judged not by tasks completed but by customer impact, innovation, and flow.
The irony is stark: these are the same organisational muscles leaders should have been developing for decades. What’s changed is the cost of neglect. AI will not cover for these weaknesses; it will expose them. And the organisations that have already built for real human autonomy are the ones best placed to harness autonomous Agents.
Entropy or autonomy
Agentic AI doesn’t liberate you from organisational dysfunction. It amplifies it.
If you’ve built for real human autonomy - balancing freedom with accountability - you’re already in the best position to use Agents. They will slot into your domains, extend your interfaces, and accelerate your outcomes.
But if you haven’t? You won’t be launching Agents. You’ll be unleashing entropy.
And the $98 billion question is whether leaders can finally see that the foundations of AI ROI are not technological. They’re organisational.
Because AI won’t fix your system. It will only show you what’s broken, faster.
Note on the research
Atlassian’s findings are drawn from surveys of 12,000 knowledge workers and 180 Fortune 1000 executives across multiple regions and industries. Like all surveys, the numbers reflect self-reporting and perception, not hard accounting. But even taken directionally, the pattern is clear: productivity alone does not equate to transformation.
This essay builds on arguments from my book, UNHYPED: From Hype to Hard ROI in the Age of AI.
And if your organisation is wrestling with how to move from faster tasks to real outcomes, this is exactly the work I do through advisory and Fractional CPO engagements. Learn more here.