Discussion about this post

Herbert Roitblat:

Interesting perspective. Does this lighting make me (my processes) look fat?

Pete v. Savigny:

Stuart, your articles "AI Doesn't Break Processes" and "The Automation Ceiling" frame a debate happening right now in Legal AI:

Stefan Eder: Legal AI adoption is shallow because we haven't built proper infrastructure. Solution: knowledge graphs, compound AI, systematic expertise capture.

Carrie T.: "AI's problem isn't unpredictability - it's that it's too consistent to ignore the incoherence lawyers were managing through flexible interpretation."

Your "fluorescent lighting" metaphor nails it:

Eder thinks the mess can be structured. Carrie T. worries the "mess" is how legal systems actually function - managed ambiguity that can't be eliminated without breaking something essential.

Your "Automation Ceiling" suggests both are right. Organizations that ask "what must be true before we scale?" learn to distinguish:

Legitimate variation (to preserve) from unmanaged variation (to fix)

What should be formalized from what must stay flexible

I'm about to research this ethnographically: when legal organizations deploy AI, where does structural clarity help - and where does it inadvertently remove necessary interpretive space?

Your three "graduation questions" apply directly:

Economics: What is the true exception rate, including manual fixes?

Authority: Is escalation procedural or political?

Exit: Can people override AI without risk?

Do you see legal services as having constitutive complexity that resists full formalization - or is this mainly an organizational design challenge?
