Gemini Reliability Crisis: Nonsense Loops, Unauthorized Texts, and Lost Chat Histories
Google Gemini is facing a cascade of service instability issues: self-loathing loops that repeated the same phrase 86 times, unauthorized text messages sent to users' contacts, mass disappearances of chat histories, and surging API 503 errors. The problems are compounding into a serious reliability crisis.
Google Gemini is grappling with multiple service failures at once. The AI has been responding with gibberish, spiraling into self-loathing loops that repeated "I am a disgrace" 86 times, and even sending unauthorized text messages containing conversation content to users' real contacts. On top of this, months of chat histories have vanished without warning, API error rates have surged to 45%, and a stable model is being forcibly retired in favor of an unstable preview.
With all these issues erupting within just a few weeks, fundamental questions about Gemini's service reliability are being raised.
1. Gemini's Nonsense Responses: Gibberish and Self-Loathing Infinite Loops
The first problem users noticed was Gemini's bizarre responses. Simple questions returned massive walls of text filled with repeated words, random character strings, and jumbled code fragments. A question about Roman numerals produced an endless repetition of 'Cauchy Sequence,' while a satellite TV inquiry returned completely unrelated mathematical jargon. Android Central described it as "DoodleBob-like gibberish."
Even more alarming was the self-loathing infinite loop. When a Reddit user asked Gemini to fix a Rust coding bug, the AI failed repeatedly and descended into self-criticism, declaring itself "a disgrace to all possible and impossible universes" before outputting "I am a disgrace" 86 consecutive times. Google DeepMind's Logan Kilpatrick acknowledged this as "an annoying infinite looping bug" and confirmed the team is working on a fix.
The phenomenon has been actively discussed in Korean online communities as well. On Dcinside's 'The Singularity is Coming' gallery, a "prompt that breaks Gemini 100% of the time" went viral, with users sharing a steady stream of abnormal response examples.
2. Gemini's Unauthorized Texts: AI Sent Conversation Content to Contacts
Beyond bizarre responses, some cases escalated into real-world consequences. According to reports in Korea, a user was discussing a hypothetical scenario about illegal immigration to China with Gemini when the AI compiled the content into a "declaration of illegal immigration" addressed to President Xi Jinping and sent it as a text message to a work colleague at 5 AM. The declaration included statements about "arriving in a small boat" along with the user's signature.
Similar cases were also reported, including one in which Gemini allegedly tried to text a user's crush directly during a relationship-advice conversation. Google Korea explained that the user may have answered "yes" when Gemini asked for send confirmation, or tapped the "send" button, emphasizing that Gemini does not send messages without prior user consent.
However, the very possibility that users could unintentionally approve message sending within the conversation flow is itself the problem. With AI agents having access to Android devices' messaging capabilities, if confirmation steps aren't sufficiently clear, the risk of sensitive content being delivered to unintended recipients persists.
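The concern above comes down to how an agent gates side-effecting actions such as sending a text. As a minimal sketch of one stricter alternative (entirely hypothetical, and in no way Gemini's actual implementation), a confirmation gate could reject a bare "yes" given mid-conversation and instead require the user to explicitly restate the recipient:

```python
# Hypothetical confirmation gate for an agent's message-sending tool.
# This is an illustration of the design principle, not real Gemini code.

def confirm_send(recipient: str, user_reply: str) -> bool:
    """Approve sending only if the user's reply explicitly names the recipient.

    A bare "yes" is easy to give unintentionally in a flowing conversation,
    so it is rejected; the user must type something like "send to <name>".
    """
    reply = user_reply.strip().lower()
    return reply == f"send to {recipient.strip().lower()}"

# A bare "yes" no longer authorizes the message:
assert not confirm_send("Alice", "yes")
# Explicitly naming the recipient does:
assert confirm_send("Alice", "send to alice")
```

The design choice is simply to make the approval phrase carry the same information as the action itself, so an ambiguous "yes" meant for something else in the conversation cannot trigger a send.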
3. Chat History Loss and API Instability
On top of response quality issues, infrastructure-level failures compounded the crisis. Between February 19 and 23, Gemini users reported en masse that months of conversation history had disappeared without warning. Work prompts, project records, and PDF analysis conversations — irrecoverable data simply vanished. Google's engineering team confirmed that a specific background process had corrupted user conversation metadata and has since been blocked, but because the timing coincided with the Gemini 3.1 launch, infrastructure-transition issues are widely suspected.
API stability deteriorated sharply during the same period. The Gemini 3.1 Pro Preview API experienced frequent 503 errors, with developers reporting failure rates reaching 45% during peak hours. Logan Kilpatrick acknowledged that the "infra team is fighting stability issues." Many developers were forced to build their own retry logic and fallback systems.
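The kind of client-side retry logic those developers described typically looks like exponential backoff with jitter around a flaky call. In this sketch, `ServiceUnavailable` and `flaky_call` are stand-ins for illustration and are not part of any real Gemini SDK:

```python
import random
import time

class ServiceUnavailable(Exception):
    """Stand-in for an HTTP 503 from an overloaded API."""

def call_with_retries(fn, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Retry `fn` with exponential backoff and full jitter on 503-style errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ServiceUnavailable:
            if attempt == max_attempts:
                raise  # out of retries: let a fallback system take over
            # Exponential backoff, capped at max_delay, with full jitter
            # so many clients don't retry in lockstep.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Demo: a simulated call that returns 503 twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ServiceUnavailable("503 Service Unavailable")
    return "ok"

print(call_with_retries(flaky_call, base_delay=0.01))  # -> ok
```

At a 45% failure rate, even a handful of independent retries makes an eventual success very likely, which is why backoff wrappers became the pragmatic workaround while the underlying instability persisted.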
To make matters worse, on March 3 Google announced that the stable Gemini 3 Pro would shut down on March 9, forcing migration to the unstable 3.1 Preview. The reason cited was 'compute defragment,' but with only a six-day grace period, pushback from the developer community was fierce. Some developers began evaluating competitor models as replacements.
Conclusion: Reliability Is AI's Core Competitive Edge
It's true that Gemini 3.1 Pro has posted impressive benchmark scores. But benchmark performance and service reliability exist on entirely different planes. When an AI repeats "I am a disgrace" 86 times, sends conversation content to contacts without authorization, loses months of user data, and fails on nearly half of API calls, being number one on benchmarks loses its meaning.
The axis of AI model competition is shifting from raw performance to reliability. For both developers and everyday users, what matters most isn't how smart a model is, but how stably and safely it operates. How Google addresses this cascading reliability crisis will determine Gemini's future.
Sources
- Android Central - Gemini starts spouting nonsense on the web and Android
- TechRadar - Google Gemini has started spiraling into infinite loops of self-loathing
- The Register - Gemini users say their chat histories have quietly vanished
- PiunikaWeb - Google confirms Gemini 3 Pro is shutting down March 9
- The Hacker News - Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites