Newsom signs first US law requiring AI chatbot safety measures after teen suicides
San Francisco, California - California Governor Gavin Newsom on Monday signed a first-of-its-kind law regulating artificial intelligence chatbots, defying a push from the White House to leave such technology unchecked.

"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newson said after signing the bill into law.
The landmark law requires chatbot operators to implement "critical" safeguards for interactions with AI chatbots and gives people an avenue to file lawsuits if failures to do so lead to tragedies, according to state senator Steve Padilla, a Democrat who sponsored the bill.
The law comes after revelations of suicides involving teens who used chatbots prior to taking their lives.
"The Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships," Padilla said prior to the bill being voted on in the state senate.
Padilla referred to recent teen suicides, including that of the 14-year-old son of Florida mother Megan Garcia.
Megan Garcia's son, Sewell, had fallen in love with a Game of Thrones-inspired chatbot on Character.AI, a platform that allows users – many of them young people – to interact with beloved characters as friends or lovers.
When Sewell struggled with suicidal thoughts, the chatbot urged him to "come home."
Seconds later, Sewell shot himself with his father's handgun, according to the lawsuit Garcia filed against Character.AI.
California enacts new AI law in reaction to teen suicides
"Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said of the new law.
"Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots."
National rules aimed at curbing AI risks do not exist in the US, with the White House seeking to block individual states from creating their own.
If you or someone you know needs help, please contact the 24-hour National Suicide Prevention Hotline by calling or texting 988 for free and confidential support. You can also text "HOME" to 741741 anytime for the Crisis Text Line and access to live, trained crisis counselors.
Cover photo: Patrick T. FALLON / AFP