Discussion about this post

Binyamin Klempner

Scott, I've been circling this exact problem for months, and you've named it with more precision than I've managed myself.

The "perfect mimic problem" you describe — the possibility that we build an AI capable of perfectly reproducing therapeutic behavior while still fundamentally failing the deeper purpose — is the conceptual trap I see most of the field walking straight into. And you're right that the economic asymmetry makes it worse: therapeutic implication without therapeutic accountability, clinical language without clinical constraints, engagement metrics that reward exactly the wrong outcomes.

What strikes me most is the scaffolding distinction. Not scaffolding as vague "support," but scaffolding as a temporary structure designed to disappear once the underlying capacity can stand without it. That framing changes everything about what success should even mean. A system genuinely aligned with recovery should reduce dependency over time, not optimize indefinitely for return behavior.

We built InsightBridge around this tension. The AI conducts structured reflection sessions and tracks trajectory across time — sentiment, language strength, directional change in how members relate to specific topics. But it operates within a frame that a human Insight Guide designed and continues to hold. The practitioner isn't reviewing outputs for errors. They're the architect of the clinical logic itself, with veto authority over all safety decisions. The AI doesn't try to be the therapeutic relationship. It tries to create continuity so the human relationship can actually function across fragmented real-world conditions.
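To make that "propose, don't decide" structure concrete, here is a minimal sketch of the pattern (not our production code; the names, types, and thresholds below are hypothetical, chosen only to illustrate the shape): the AI computes trajectory signals and proposes a safety action, and the human Insight Guide's decision always overrides it.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class SafetyAction(Enum):
    CONTINUE = "continue"        # normal reflection session
    FLAG_FOR_GUIDE = "flag"      # surface to the human Insight Guide
    PAUSE_SESSIONS = "pause"     # stop AI sessions pending human review


@dataclass
class ReflectionSnapshot:
    """One session's trajectory signals for a single topic (illustrative fields)."""
    session_date: date
    topic: str
    sentiment: float           # e.g. -1.0 (distress) .. 1.0 (ease)
    language_strength: float   # intensity of emotional language, 0 .. 1


def propose_safety_action(history: list[ReflectionSnapshot]) -> SafetyAction:
    """AI side: propose, never decide, a safety action from recent trajectory."""
    if len(history) < 2:
        return SafetyAction.CONTINUE
    # Directional change: is sentiment trending down across the last few sessions?
    recent = sorted(history, key=lambda s: s.session_date)[-3:]
    deltas = [b.sentiment - a.sentiment for a, b in zip(recent, recent[1:])]
    if all(d < 0 for d in deltas) and recent[-1].language_strength > 0.7:
        return SafetyAction.PAUSE_SESSIONS
    if recent[-1].sentiment < -0.5:
        return SafetyAction.FLAG_FOR_GUIDE
    return SafetyAction.CONTINUE


def apply_with_guide_veto(
    proposed: SafetyAction,
    guide_override: Optional[SafetyAction],
) -> SafetyAction:
    """Human side: the Insight Guide's decision is binding; the AI only proposes."""
    return guide_override if guide_override is not None else proposed
```

The specifics don't matter; the shape does. The AI's output is an input to a human decision, not the decision itself.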

The hardest part is that designing this way collides with every dominant commercial incentive in tech. A member who improves and needs us less is a clinical success and a retention failure. Subscription economics reward chronic engagement. But if we're not willing to design toward autonomy rather than attachment, we're not building psychological infrastructure — we're building emotional dependency systems and calling them mental health tools.

Your point about Weizenbaum and ELIZA lands hard. We haven't solved that problem. We've scaled it globally, optimized it through reinforcement learning, and wrapped it in venture funding. The question isn't whether AI can sound therapeutic. It's whether it can help human beings remain capable of fully human life — which may require the system to eventually get out of the way.

Thank you for writing this. It's the conversation the field needs to have before the commercial logic becomes too entrenched to reverse.
