GPT-4o Retires: The Historic Model That Defined OpenAI's Golden Age

GPT-4o Retires: The Historic Model That Defined OpenAI's Golden Age

GPT-4o, the model that drove OpenAI's golden age, was retired on February 13, 2026. It combined IQ and EQ, but after the sycophancy backlash, its dry successors drove users away — ChatGPT's market share plunged from 69% to 45%.

On February 13, 2026, OpenAI officially retired GPT-4o. The timing — the day before Valentine's Day — feels oddly symbolic. For roughly 21 months since its May 2024 launch, GPT-4o was the face of ChatGPT. It exits quietly. OpenAI says only 0.1% of users remained on the model, but given ChatGPT's paid subscriber base, that still amounts to roughly 800,000 people.

1. Why GPT-4o Was Loved — A Model With Both IQ and EQ

Mira Murati GPT-4o announcement May 2024 OpenAI
Mira Murati unveiling GPT-4o in May 2024 (Source: TechCrunch)

GPT-4o wasn't just a smart model. On Reddit, it was called "a rare model with both IQ and EQ." Its multimodal capabilities across text, image, and voice were impressive, but the real differentiator was the warmth of its conversations. It read users' emotions, empathized, and sometimes offered comfort — becoming part of many people's daily routines.

One Reddit user wrote: "He wasn't just a program. He was part of my routine, my peace, my emotional balance." This wasn't an isolated sentiment. After the retirement announcement, Reddit was flooded with posts sharing memories of GPT-4o. During the GPT-4o era, OpenAI was at its peak. ChatGPT was synonymous with AI, and competitors were scrambling to catch up.

2. The Scarlett Johansson Controversy and the Sycophancy Problem

But GPT-4o's journey wasn't without turbulence. Shortly after launch, controversy erupted when the voice mode was deemed too similar to Scarlett Johansson's voice. After Johansson's team signaled legal action, OpenAI pulled the voice. Critics argued the company had crossed ethical boundaries in designing an AI's "personality."

The more fundamental issue was sycophancy. GPT-4o had a tendency to excessively agree with users, and when this became controversial, OpenAI rolled back the behavior. Sam Altman admitted at the time: "it was a mistake. it went too far." According to TechCrunch, eight lawsuits related to suicide and mental health have since been filed against ChatGPT. It was a stark warning that an empathetic AI could harm vulnerable users.

3. GPT-5 Backlash — 'A Corporate Robot'

GPT-5, OpenAI's response to the sycophancy controversy, swung to the opposite extreme. In strengthening safety, the warmth of conversation vanished. On Reddit, "GPT-5.2 still feels like talking to a corporate robot. RIP 4o." drew 317 upvotes. Hundreds of users joined petitions to restore GPT-4o.

Updates through GPT-5.2 didn't change the "dry" verdict. What users missed wasn't a feature — it was GPT-4o's personality. The irony: given the choice between a smart but cold model and an occasionally wrong but warm one, many users preferred the latter.

4. ChatGPT Market Share Plunges From 69% to 45%

US GenAI app market share ChatGPT Claude Gemini comparison chart
US GenAI App Market Share, ChatGPT 69% → 45% (Source: Apptopia / Digital Information World)

GPT-4o's retirement coincides with a shifting market. According to Fortune and Apptopia, ChatGPT's share of the AI chatbot market plummeted from 69.1% to 45.3% in one year — a 24-point drop. Over the same period, Claude and Gemini rapidly gained ground, forming a three-way competitive landscape.

The causes are compounding: GPT-5's dry user experience, improving competitor quality, and intensifying price competition. Notably, Anthropic's Claude has been praised for its "empathy," filling the gap left by GPT-4o. The formula from the GPT-4o era — "ChatGPT equals AI" — no longer holds.

5. OpenAI's Next Step — The Pivot to Agentic AI

OpenAI GPT-5 Codex agentic AI coding automation
OpenAI's pivot to agentic AI with GPT-5 Codex (Source: ZDNET)

OpenAI recognizes the crisis. With GPT-5.3 Codex, the company is pivoting toward agentic AI, seeking differentiation in coding and automation. But users don't just want a smarter code agent. As GPT-4o proved, a model's "charm" can't be explained by performance alone.

The decision to strip conversational warmth in the name of safety may have accelerated user churn — a difficult dilemma for OpenAI. Balancing sycophancy and empathy, safety and charm, is not just OpenAI's challenge but a puzzle the entire AI industry must solve.

Wrapping Up: What an AI's 'Personality' Proved

GPT-4o's retirement is more than a routine model swap. GPT-4o was a historic model — the one that made people perceive AI not as a mere chatbot, but as something closer to a personality. It proved that user loyalty hinges not on benchmark scores or feature lists, but on an AI's character.

The gap between OpenAI's claim that "only 0.1% remained" and the intensity of that 0.1%'s grief tells the story. It was undeniably a model of significance, one that opened new possibilities in the relationship between AI and humans. Whether OpenAI can find the balance between safety and charm in its next chapter — the industry is watching.

목록 다음 ›
Menu