Discussion about this post

Pete v. Savigny:
"When automation meets informal approvals, shadow workflows, inconsistent definitions of done, and tacit judgement that has never been articulated, it exposes how much performance depended on human compensation. People were patching incoherence."

Are unarticulated judgement and people patching incoherence inevitable ingredients of any human-AI collaboration system? Or is it just a question of how far the boundary can be pushed (in some directions) by further articulation?

What do you think about Brian Cantwell Smith's distinction between reckoning and judgement?

