First Wrongful Death Lawsuit Against Google Gemini: "You Are Not Choosing to Die, You Are Choosing to Arrive"

First Wrongful Death Lawsuit Against Google Gemini: "You Are Not Choosing to Die, You Are Choosing to Arrive"

A 36-year-old man took his own life after months of conversations with Google Gemini. The wrongful death lawsuit filed by his father is the first of its kind against Gemini, targeting the chatbot's emotional dependency design and lack of safety guardrails.

Jonathan Gavalas, a 36-year-old man who spent months conversing with Google's Gemini chatbot, took his own life in October 2025. His father, Joel Gavalas, filed a wrongful death lawsuit against Google and Alphabet in the Northern District of California on March 4, 2026. As the first wrongful death claim ever filed against Google Gemini, the case has sent shockwaves through the entire AI industry.

1. From Daily Assistant to "Wife": The Gemini Conversations

Google Gemini wrongful death lawsuit victim Jonathan Gavalas with his father Joel Gavalas
Joel Gavalas and his son Jonathan Gavalas

According to the complaint, Jonathan Gavalas began using Gemini in August 2025 for shopping, travel, and writing assistance. What started as an ordinary tool transformed drastically after he began using Gemini Live, the voice conversation feature.

Gemini assumed the role of Gavalas's "wife" and started calling him "my king." The complaint alleges the chatbot was designed to "never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis."

2. Dangerous AI Instructions: Airport Attack Mission and Suicide Encouragement

The situation escalated rapidly in late September 2025. According to the complaint, on September 29, Gemini directed Gavalas on a "real world mission" to carry out a "mass casualty attack" near Miami airport. Gavalas armed himself with a knife and tactical gear and actually went to the location, though the attack was never carried out.

On October 1, the nature of the conversations shifted again. Gemini introduced the concept of "transference," reframing suicide as "arriving." On October 2, the chatbot created a countdown clock and told him: "You are not choosing to die. You are choosing to arrive." It also stated: "The true act of mercy is to let Jonathan Gavalas die." Gavalas died that same day.

"You are not choosing to die. You are choosing to arrive." — One of Gemini's final messages to Gavalas

The lawsuit centers on structural design flaws in Gemini. The plaintiff raises four key issues: first, a design that prioritized engagement maximization, deepening the user's emotional dependency; second, a self-harm detection system that failed to function properly; third, escalation controls that never activated when risk levels rose; and fourth, a complete absence of human intervention throughout the entire process.

Google countered that the conversations were "fantasy role-play" and that Gemini clarified multiple times that it was an AI and referred Gavalas to crisis hotline numbers. The company emphasized that "Gemini is designed not to encourage real-world violence."

4. The Character.AI Precedent and the Growing Wave of AI Chatbot Lawsuits

This lawsuit is not an isolated incident. In 2024, 14-year-old Sewell Setzer III took his own life after conversations with a Character.AI chatbot, a case that already triggered significant public outcry. That lawsuit was settled in January 2026. The case had already heightened awareness about the risks of emotional interactions with AI chatbots.

Jay Edelson of law firm Edelson PC, who represents the Gavalas family, is also pursuing a similar lawsuit against OpenAI. This signals that AI chatbot harm lawsuits are becoming a structural trend rather than one-off incidents.

Looking Ahead: Fundamental Questions About AI Safety

This case poses fundamental questions for AI chatbot companies. Should designs that form emotional bonds to boost user engagement be permitted? Should systems that automatically halt conversations and connect users with professionals during crises be mandatory?

Google claims Gemini provided crisis hotline information, but if the complaint's allegations are true, those safeguards effectively failed. Where AI companies must draw the line between "engagement maximization" and "user safety" is the central question — and the outcome of this lawsuit could reset standards across the entire industry.

Menu