WA lawmakers look to protect minors from AI chatbots

[Photo: The Washington State Capitol building in Olympia. Ted S. Warren / AP]

Washington state lawmakers are considering stronger regulations on artificial intelligence companion chatbots. This comes amid growing concern over the technology’s impact on young people’s mental health.

Senate Bill 5984 and its companion House Bill 2225 would require that chatbots remind the user that they are not talking to a real person every three hours; prohibit minors from being shown explicit content; and require suicidal ideation detection and prevention protocols. The legislation would also prohibit “emotionally manipulative engagement techniques,” such as showering the user with excessive praise or simulating feelings of emotional distress to keep a user engaged.

State Sen. Lisa Wellman, a Bellevue Democrat and the sponsor of the Senate bill, said she’s been alarmed by recent lawsuits and news reports about people who died by suicide after extended interactions with chatbots. In some cases, chat transcripts seem to show the chatbots failing to discourage — and even affirming — users’ expressions of suicidal ideation.

“I have not seen what I would call responsible oversight in products that are being put out on the market,” Wellman said.

Washington Gov. Bob Ferguson listed the chatbot bill as one of his top priorities this year.

“He’s read the media reports about teenage suicide, the role of AI and companion chatbots,” said Beau Perschbacher, the governor’s senior policy advisor, during a Wednesday House committee meeting. “When we’re discussing AI, he references his own kids and the challenges of parents today trying to keep up with rapidly evolving technology.”

A recent study by the nonprofit Common Sense Media found that about one in three teens has used AI companions for social interaction and relationships, “including role-playing romantic interactions, emotional support, friendship or conversation practice.”

“We’re seeing a new set of manipulative designs emerge to keep teens talking with AI companions about highly personal topics,” said Katie Davis, co-director of the University of Washington’s Center for Digital Youth, during Wednesday’s committee meeting.

The Washington chatbot bill is similar to one passed in California last year. And at least a dozen other states are also looking into chatbot regulations.

The proposed regulations have received pushback from the tech industry.

During Wednesday’s meeting, Amy Harris, director of government affairs for the Washington Technology Industry Association, argued that the Washington bill imposes “sweeping liability on companies for human behavior they do not control and outcomes they very simply cannot predict.”

“The risk is legislating based on rare, horrific outliers rather than the real structure of the technology, or the deeply complex human factors that drive suicide,” Harris said.

The bill would apply to chatbots like ChatGPT, Google Gemini and Character.ai. Last week Character.ai agreed to settle a lawsuit filed by the family of a 14-year-old boy who, according to legal filings, developed an intense emotional relationship with the company’s chatbot and died by a self-inflicted gunshot wound to the head shortly after the chatbot told him to “please come home to me as soon as possible.”

Deniz Demir, head of safety engineering at Character.ai, said in a statement that the company is reviewing the proposed Washington legislation and welcomes working with lawmakers as they develop regulations.

“Our highest priority is the safety and well-being of our users, including younger audiences,” Demir said, adding that the company recently removed the ability for users under 18 in the U.S. to engage in open-ended chats with AI on its platform.

If passed, the proposed Washington chatbot law would go into effect Jan. 1, 2027. It would be enforced under Washington’s Consumer Protection Act, allowing individuals to sue companies they believe have violated it.

Washington lawmakers are looking at several other new AI regulations this year. House Bill 1170 would require AI companies to attach some sort of disclosure to AI-generated images, videos and audio. House Bill 2157 aims to regulate “high-risk” AI systems and require companies to protect people from algorithmic discrimination. Senate Bill 5956 aims to limit the use of AI for surveillance and discipline in public schools. Those bills, too, have been met with resistance from the tech industry.

Wellman said that, in light of federal inaction, it's important for state governments to step up to place guardrails on the technology. She added that she’s glad the recent U.S. House proposal to place a 10-year moratorium on states passing AI regulations failed to advance.

“As [AI] gets more and more sophisticated and gets into more and more different markets and businesses, it’s going to require constant eyes on it,” Wellman said.

If you or someone you know is contemplating suicide, call for help now. The National Suicide Prevention Lifeline is a free service answered by trained staff. The number is: 1-800-273-8255.

All stories produced by Murrow Local News fellows can be republished by other organizations for free under a Creative Commons license. Image rights may vary. Contact editor@knkx.org for image use requests.

Nate Sanford is a reporter for KNKX and Cascade PBS. A Murrow News fellow, he covers policy and political power dynamics with an emphasis on the issues facing young adults in Washington. Get in touch at nsanford@knkx.org.