4 Comments
Jurgen Appelo

When AI takes over data, information, and knowledge, what remains to us is wisdom.

Reality Drift Archive

This nails the part most AI conversations skip. Irreversibility is the real risk. Once decisions settle into architecture and incentives, they stop feeling like choices at all, and momentum turns into gravity.

The Algorithmic Enterprise

The distinction between delaying execution and delaying the decision is the whole game. Most orgs launch pilots specifically to avoid deciding anything. Where I'd add pressure: "still can" has an expiration date.

Algorithmic competitors aren't faster because they're reckless. They're faster because they did the upstream work once, built decision architectures with clear ownership, and now compound at 100x the velocity.

Traditional enterprises keep redoing upstream work for every decision because they never actually commit. They call it "alignment." It's just deferred accountability with a calendar invite.

"Decide properly while you still can" is exactly right.

The timeline for "still can" is just shorter than most boards want to hear.

mark holdaway

What perpetually bothers me about discussions of AI integration is the question entrepreneurs used to ask before taking serious action: what problem are we solving? I am unaware of any significant human movement directly seeking lifelong unemployment. I see LLM AI helping me think, and I appreciate the energetic sparring partner it can be, but seriously, what is it for? Don't get me wrong, I see some finite helpful use cases, but why do the developers want to push these boundaries, knowing the supposed dangers and not being able to answer the problem question? The only answer that seems to fall easily out of the formulation is "humans are unpredictable, have limited energy, and are slow; we need something better." So we are the problem, ok.