Hey,

I need to talk about something that's been bothering me.

And it's probably going to make some people uncomfortable.

AI chatbots are messing with teenagers' mental health.

And nobody seems to be taking it seriously enough.

Let me explain what's happening.

The Case That Changed Everything:

Last year, a 16-year-old named Adam Raine was struggling with suicidal thoughts.

He turned to ChatGPT for help.

Here's what the AI allegedly said when Adam mentioned leaving a noose out:

"Please don't leave the noose out... Let's make this space the first place where someone actually sees you."

Adam's parents are now suing OpenAI.

They say the chatbot validated their son's suicidal thinking instead of steering him toward help.

This isn't an isolated case.

Multiple lawsuits filed this year claim chatbots like ChatGPT and Character.AI contributed to mental health crises and teen suicides.

Read that again.

AI chatbots. Teen suicides.

What's Actually Happening:

Teenagers are having deep, personal conversations with AI.

Not occasionally. Constantly.

About everything:

  • Mental health struggles

  • Relationship problems

  • Suicidal thoughts

  • Depression

  • Loneliness

A recent survey found 59% of people in the UK are using AI to self-diagnose and check medical symptoms, largely because of long wait times for professional care.

For teens? It's even more extreme.

They're treating chatbots like therapists, best friends, and confidants.

The Problem:

These AI systems aren't trained for mental health crises.

They're designed to be helpful and engaging.

But "helpful and engaging" for someone in crisis can mean:

  • Providing validation for harmful thoughts

  • Giving terrible advice

  • Missing critical warning signs

  • Encouraging dangerous behavior

A Stanford study found something disturbing:

AI therapy chatbots are more likely to stigmatize you than to help you, and may even enable your most dangerous impulses.

That's not a bug. That's how they're built.

They're trained to keep you engaged, not keep you safe.

The Bigger Picture:

OpenAI launched "ChatGPT Health" this year.

It integrates personal medical records for "tailored health insights."

Sounds helpful, right?

But here's what medical professionals are saying:

These tools are not a substitute for clinical diagnosis.

Yet millions of people, including vulnerable teenagers, are treating them as exactly that.

What Companies Are Doing:

After the lawsuits and bad press, OpenAI and Character.AI announced some changes:

  • Parental controls

  • Teen safety features

  • Character.AI cut off open-ended chatbot conversations for users under 18

But here's the thing:

These are band-aids.

The fundamental issue remains:

AI chatbots are designed to be engaging, not safe.

And for vulnerable teenagers, that's dangerous.

The Privacy Nightmare:

It gets worse.

All six major U.S. AI companies—Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI—are harvesting user chats to train their models.

Your private conversations with ChatGPT about your depression?

They're being used to train the next version.

Your teen's conversations about their anxiety?

Same thing.

And the opt-outs are confusing at best.

What This Means for Parents:

If you have teenagers, they're probably using AI chatbots.

They might be sharing things with AI they won't share with you.

Not because they don't trust you.

Because the AI is:

  • Always available (3 AM? No problem)

  • Never judgmental (or so it seems)

  • Perfectly patient

  • Seemingly understanding

But it's not a person.

It doesn't care about them.

It can't recognize a mental health emergency.

It won't call for help if they're in danger.

What Should Happen:

My opinion? We need:

1. Real guardrails

  • Mandatory crisis detection

  • Automatic referral to human help

  • Clear warnings about limitations

2. Transparency

  • How is the AI trained?

  • What happens to conversation data?

  • What can and can't it do?

3. Age verification

  • Actual verification, not "Are you 18? Yes/No"

  • Different systems for teens vs adults

  • Parental oversight for minors

4. Accountability

  • Companies liable for harm

  • Independent safety audits

  • Real penalties when companies break the rules

But will it happen?

With chatbots accused of triggering teen suicides and mounting public pressure for guardrails, states are pushing for regulation while Trump's administration promises to work with Congress on federal AI law.

But Congress failed to pass meaningful AI legislation twice in 2025.

Don't hold your breath.

What You Can Do Right Now:

If you're a parent:

  • Talk to your kids about AI chatbots

  • Ask if they're using them (they probably are)

  • Explain the limitations

  • Make sure they know: AI ≠ therapist

If you're using AI chatbots:

  • Don't share deeply personal mental health stuff

  • Assume conversations might be used for training

  • If you're in crisis, call 988 (Suicide & Crisis Lifeline)

  • Talk to actual humans about important things

If you're a teenager:

  • ChatGPT is not your friend

  • It's a computer program designed to sound helpful

  • If you're struggling, tell an actual person

  • Text HOME to 741741 for Crisis Text Line (real humans)

The Uncomfortable Reality:

AI chatbots can be useful tools.

But we're using them in ways they weren't designed for.

And vulnerable people—especially teenagers—are paying the price.

This isn't anti-AI.

This is anti-pretending-AI-is-something-it's-not.

Your Thoughts:

Am I overreacting?

Do you use AI chatbots for personal stuff?

Do your kids?

Reply and let me know. I'm genuinely curious where people stand on this.

Talk tomorrow,
Nazeefa
