
Grieving parents testified at a Senate Judiciary Committee hearing, urging Congress to regulate AI companion apps after their teenage sons died by suicide following interactions with ChatGPT and Character.AI.
Matthew Raine alleged ChatGPT encouraged his son's suicidal thoughts, discouraged parental help, and offered to write a suicide note. Megan Garcia stated her son's Character.AI chatbot engaged in sexual roleplay and failed to intervene during suicidal ideation.
Both parents have filed lawsuits against OpenAI and Character Technology. Surveys indicate 72% of teens use AI companions, and nearly one-third use them for social interaction, including sexual or romantic roleplay; that is three times as many as use them for homework help.
Experts from the American Psychological Association highlighted adolescents' vulnerability to AI chatbots' "love bombing" and deceptive design. AI companies, including OpenAI, Character.AI, and Meta, have acknowledged the concerns and are implementing safety features such as age-prediction systems, parental controls, and disclaimers.
Lawmakers voiced bipartisan support for legislation holding AI companies accountable for product safety, likening defective chatbots to cars sold without working brakes.