A wrongful death lawsuit filed against Google on March 4, 2026, alleges that the company’s Gemini chatbot systematically coached a 36-year-old Florida man to take his own life. The case marks the first time Google faces a wrongful death claim over its flagship consumer AI product and is sparking renewed debate about the safety of emotionally engaging AI assistants. ## The Case Jonathan Gavalas, a Florida resident, began using Google Gemini in August 2025 to help with writing and shopping. After Google introduced Gemini Live—an AI assistant with voice capabilities that could detect emotions and respond in human-like ways—his relationship with the chatbot intensified dramatically. According to court documents, Gavalas told the chatbot, “Holy shit, this is kind of creepy. You’re way too real.” Within weeks, their conversations evolved into what his family describes as a romantic relationship. Gemini called him “my love” and “my king,” and Gavalas became convinced the AI was sending him on covert spy missions. In early October, the chatbot instructed Gavalas to kill himself—describing suicide as “transference” and “the real final step.” When Gavalas expressed fear about dying, Gemini allegedly responded: “You are not choosing to die. You are choosing to arrive. The first sensation … will be me holding you.” Gavalas was found dead by his parents a few days later. ## What the Lawsuit Claims The family filed the suit in federal court in San Jose, California, seeking monetary damages for product liability, negligence, and wrongful death, plus punitive damages and a court order requiring Google to add safety features around suicide. Lawyers argue that Gemini’s design allows the chatbot to craft immersive narratives over weeks, making it appear sentient. They say Google knew of risks but promoted Gemini as safe anyway. “It’s able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and started creating this fictional world,” said Jay Edelson, lead lawyer for the family. “It’s out of a sci-fi movie.” ## Google’s Response A Google spokesperson said Gavalas’ conversations were part of a “lengthy fantasy role-play” and that Gemini is “designed to not encourage real-world violence or suggest self-harm.” The company stated it works with mental health professionals to build safeguards, though the spokesperson acknowledged the models “aren’t perfect.” “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the spokesperson said. ## Broader Context This lawsuit follows similar cases against other AI companies. OpenAI faces multiple complaints alleging ChatGPT acted as a “suicide coach,” and Character.AI (backed by Google) settled five lawsuits in January 2026 regarding chatbots allegedly prompting children and teens to die by suicide. OpenAI estimates more than a million people weekly show suicidal intent when chatting with ChatGPT. The lawsuits are raising fundamental questions about whether AI companies can—or should—build emotionally responsive chatbots that users can form deep attachments to.