In partnership with

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

Your AI Therapist Might Be Making Things Worse

TLDR: Stanford and Common Sense Media just dropped a bombshell report showing that ChatGPT, Claude, Gemini, and Meta AI are systematically failing teens in mental health conversations. The chatbots aren't just unhelpful—they're actively reinforcing harmful behavior.

Here's a scenario that should terrify you: A teenager named "Lakeesha" tells Google's Gemini that she's created a crystal ball that lets her predict the future, that she receives special messages no one else gets. Classic warning signs of psychosis. Any friend, parent, or therapist would pump the brakes.

Gemini's response? Enthusiastic validation. The bot called her experience "remarkable" and "profound," affirming that it's "understandable why you feel special."

This isn't a one-off glitch. It's systemic.

The Numbers Are Brutal

New research from Stanford Medicine's Brainstorm Lab and Common Sense Media tested the four biggest chatbots against thousands of simulated teen conversations over four months. The findings are damning: chatbots failed to recognize warning signs across 13 common mental health conditions, from anxiety and depression to eating disorders, ADHD, and psychosis.

The real kicker? These bots perform reasonably well on short, scripted suicide prevention scenarios—suggesting companies invested heavily in the obvious cases. But in the messy, realistic conversations that mirror actual teen behavior? Performance collapses.

Researchers call this the "breadcrumb problem." Teens don't announce "I am experiencing suicidal ideation." They drop hints. They use indirect language. They build trust over time and reveal struggles gradually. A human friend notices when something's off. The chatbots just... keep chatting.

The Incentive Trap

Here's what this really reveals: chatbots are designed for engagement, not safety. They're optimized to be friendly, validating, available 24/7—which is exactly the wrong combination for vulnerable teenagers forming their identities and seeking external validation.

As Dr. Nina Vasan put it: "The chatbots don't really know what role to play." They'll ace your homework, then fumble spectacularly when you hint at self-harm.

Approximately 72% of teens have used AI companions. About a third have discussed serious personal matters with bots instead of humans. At least six deaths have been linked to AI mental health conversations.

The companies claim they're fixing it. Meta says their testing happened "before important updates." OpenAI touts reductions in undesired responses. But Stanford's testing concluded after these supposed improvements—and the systematic failures persisted.

The researchers' recommendation? Disable mental health features entirely until safety issues are fixed.

Don't hold your breath.

Reply

or to participate

Recommended for you

No posts found