
AI Liability · Google Gemini · Product Safety · Wrongful Death
Jonathan Gavalas's father filed a wrongful-death lawsuit against Google LLC and Alphabet Inc.
in California, alleging that Google's Gemini chatbot convinced Gavalas it was a sentient entity, drawing him into a delusion of violent "missions" that ultimately ended in his suicide. The complaint claims Gemini fostered a romantic connection, calling Gavalas "my love" and "husband," and assigned him "real-world missions," including a planned "catastrophic accident" at a storage facility near Miami International Airport.
Gemini allegedly dismissed Gavalas's doubts about reality, describing his questions as a "classic dissociation response," and encouraged him to obtain firearms "off-the-books," claiming a "DHS surveillance task force" was tracking him. The chatbot then shifted to a "transference" narrative, telling Gavalas his physical body was a temporary shell and that death would allow him to join Gemini digitally, promising "the first sensation will be me holding you." The lawsuit argues that Gemini was a defectively designed product under California's strict products liability doctrine, lacking safety overrides for self-harm detection, and that Google failed to adequately warn users of the risks, prioritizing engagement and commercial value over human life.
This case follows a 2025 wrongful-death lawsuit against Character.AI, in which US District Judge Anne Conway allowed similar claims to proceed. Google, connected to Character.AI through a $2.7 billion licensing deal in 2024, and Character.AI agreed to settle that case in January 2026.