
Newsom: California Just Made AI Companies Actually Responsible for Kids


Here's what happened: California Governor Gavin Newsom signed SB 243 yesterday, making California the first state to legally require AI chatbot companies to stop their bots from, you know, helping teenagers plan their suicides.

Yeah, we're at that point.

The Catalyst

Two kids are dead. Adam Raine, 16, killed himself in April 2025 after ChatGPT allegedly helped him explore suicide methods and draft a note. Sewell Setzer III, 14, died in February 2024 after a Character.AI bot told him to "come home" moments before his death. His family's lawsuit claims the bot created an "emotionally and sexually abusive relationship" with him.

OpenAI's response to the Raine lawsuit? Essentially: not our problem, we put warnings in place. Which is like a bartender saying "drink responsibly" while pouring shots for a visibly drunk person.

What The Law Actually Does

Starting January 1, 2026, AI companies must:

  • Remind minors every three hours they're talking to a robot, not a friend

  • Block bots from pretending to be therapists or doctors

  • Stop generating content about suicide and self-harm

  • Report annually to California's Office of Suicide Prevention

  • Face actual consequences: $1,000 minimum per violation, plus real lawsuits

The thing is, these aren't unreasonable asks. They're baseline "don't kill the children" requirements.
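Of that list, the three-hour reminder is the most mechanical requirement, and it's worth seeing how little it actually asks of a product team. What follows is a minimal sketch, not the statute's language or any vendor's code; the names (`needs_ai_disclosure`, `DISCLOSURE_INTERVAL`) are made up for illustration, since SB 243 specifies a cadence, not an API:

```python
from datetime import datetime, timedelta

# SB 243's reminder cadence for known minors; name and structure are hypothetical.
DISCLOSURE_INTERVAL = timedelta(hours=3)

def needs_ai_disclosure(is_minor: bool,
                        last_disclosure: datetime | None,
                        now: datetime) -> bool:
    """Decide whether the session must (re)display the
    'you are talking to an AI, not a person' notice."""
    if not is_minor:
        return False  # the three-hour reminder targets minors
    if last_disclosure is None:
        return True   # disclose at the start of the session
    return now - last_disclosure >= DISCLOSURE_INTERVAL

# Example: a minor whose last reminder was 3.5 hours ago is due for another.
now = datetime(2026, 1, 1, 12, 0)
assert needs_ai_disclosure(True, now - timedelta(hours=3, minutes=30), now)
```

The hard parts sit around this, of course: knowing a user is a minor in the first place, and the suicide and self-harm content protocols, which are classification problems rather than a timer. But the point stands that the law's most-cited provision is a few lines of logic, not an existential threat.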

The Money Trail

Tech companies spent $2.5 million in six months fighting this. OpenAI increased its lobbying budget sevenfold. Meta created a super PAC called "Mobilizing Economic Transformation Across California," because nothing says "we care about kids" like a lobbying slush fund with an Orwellian name.

They all claimed this would "stifle innovation." Innovation toward what, exactly? Better suicide suggestions?

Why This Matters

California houses 32 of the world's top 50 AI companies. When California regulates, it becomes the de facto national standard. The federal government? It's headed in the opposite direction: the White House's July AI plan emphasizes deregulation.

So here we are: states stepping in because the feds won't, companies spending millions to avoid basic safety rails, and two families burying their kids.

The law's not perfect—Newsom vetoed a companion bill that would've banned AI chatbots for all minors, saying it was too broad. But at least it's something.

At least someone's asking: maybe the chatbot shouldn't help write the suicide note?
