Meta's Next AI Model Avocado: Rumors of Internal Test Results

A leaked Meta memo calls Avocado its most capable pre-trained model yet, rivaling SOTA before post-training. First result under Alexandr Wang after Llama 4.

The performance of Meta's next-generation frontier AI model, codenamed 'Avocado,' has been revealed in concrete terms for the first time. According to an internal Meta memo obtained by The Information, Avocado already outperforms existing open-source models and competes with post-trained SOTA models at the pre-training stage alone.

This model is the first major result from Meta's new leadership after the disappointing launch of Llama 4 and a sweeping organizational overhaul. The industry is watching closely to see whether Meta can stage a genuine comeback in the AI race.

1. Avocado: Meta's Most Capable Pre-Trained Model Ever

(Image: Overview of Meta's next-gen AI model Avocado)

Avocado is a next-generation text-based frontier model being developed at TBD Lab under Meta Superintelligence Labs (MSL). As the codename suggests, the official name has yet to be decided. Alongside Avocado, Meta is developing 'Mango,' a multimodal model specialized in image and video generation.

According to an internal memo written by MSL Product Manager Megan Fu, Avocado is regarded as the 'most capable pre-trained model' in Meta's history. What's particularly noteworthy is that even without post-training, it can compete with post-trained SOTA models in knowledge, visual understanding, and multilingual capabilities.

The efficiency numbers are equally striking. The memo states that Avocado is 10x more efficient than Llama 4 Maverick and 100x more efficient than Behemoth. CTO Andrew Bosworth also remarked at the Davos Forum that the new model is 'very good.'

2. The Llama 4 Failure That Triggered a Massive Reorganization

(Image: Alexandr Wang, Meta's new Chief AI Officer)

To understand Avocado's significance, you need to look at the Llama 4 debacle of 2025. Meta's ambitious Llama 4 was mired in benchmark manipulation controversies, dealing a significant blow to the company's AI credibility. Internal dissatisfaction with model quality was widespread, ultimately leading to a sweeping reorganization of Meta's AI division.

The most dramatic change was the recruitment of Scale AI CEO Alexandr Wang. The 28-year-old joined Meta as Chief AI Officer as part of a deal in which Meta invested $14.3 billion in Scale AI. In June 2025, Meta established Meta Superintelligence Labs (MSL), installed Wang as its head, and consolidated AI operations that had been scattered across four separate divisions.

Yann LeCun, who had long led Meta's AI efforts, departed during this process amid layoffs of 600 employees. Meta also aggressively recruited researchers from OpenAI who had made key contributions to o3 and GPT-4o, reportedly offering signing bonuses of up to $100 million. The rumors were significant enough to draw a direct response from Sam Altman, sending shockwaves through the industry.

3. From Open Source to Closed: Meta's Strategic Pivot

(Image: Launch of Meta Superintelligence Labs (MSL))

Another reason Avocado is drawing attention is that Meta's AI strategy itself is undergoing a fundamental shift. The company, which had positioned itself as a champion of AI democratization by open-sourcing its Llama series, is now likely to release Avocado as a proprietary (closed) model.

One direct catalyst for this pivot was China's DeepSeek leveraging Llama models to rapidly build competitive alternatives. Meta's leadership grew frustrated that models developed with enormous investment were serving as stepping stones for competitors, reportedly accelerating the strategic shift.

Meta has set its 2026 AI-related capital expenditure at $115 billion to $135 billion, a 73% increase over the previous year. This scale of investment signals just how seriously Meta is taking AI. It represents a clear commitment to going all-in on direct competition with OpenAI, Google, and Anthropic.

4. What Pre-Training Performance Means: Promise and Uncertainty

(Image: Meta's massive AI infrastructure investment plan)

While the internal memo's performance assessment is impressive, several points of context deserve consideration. First, the figures released so far reflect pre-training stage results. Pre-training is the phase where the model learns fundamental capabilities from massive datasets; subsequent post-training through RLHF and other techniques is required before the model is ready for deployment.
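The two-stage pipeline described above can be illustrated with a deliberately toy sketch in Python. Everything here is a stand-in for illustration only, not Meta's actual stack: `pretrain` "learns" next-word frequencies from raw text (standing in for large-scale pre-training), and `posttrain` boosts preferred continuations (standing in for RLHF-style alignment).

```python
# Toy illustration of pre-training vs. post-training -- NOT a real LLM pipeline.
from collections import Counter

def pretrain(corpus):
    """Stand-in for pre-training: learn next-word frequencies from raw text."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, Counter())[nxt] += 1
    return model

def posttrain(model, preferences):
    """Stand-in for post-training (e.g. RLHF): boost preferred continuations."""
    for prev, preferred in preferences:
        model.setdefault(prev, Counter())[preferred] += 10  # toy reward signal
    return model

base = pretrain("the model is large the model is slow")
print(base["is"].most_common(1)[0][0])    # top continuation from raw data alone
tuned = posttrain(base, [("is", "helpful")])
print(tuned["is"].most_common(1)[0][0])   # preference now dominates
```

The point of the toy is the division of labor: the first stage only reflects the statistics of the training data, while the second stage reshapes behavior toward human preferences, which is why a strong pre-trained base is a prerequisite rather than a finished product.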

The encouraging news is that strong pre-training performance suggests even greater potential after post-training. The memo's phrasing that Avocado 'can compete with SOTA models even before post-training' reads as a signal of substantial potential in the final model.

That said, it's worth noting that this is an internal memo. As a document reporting the first achievements of a new organization, it may carry an optimistic tone. Without publicly disclosed benchmark numbers, it remains unclear exactly what level of performance 'can compete with existing models' truly represents.

Looking Ahead: The Real Test for Meta AI

Avocado represents far more than just another new model. It is the first fruit of the upheaval Meta AI has undergone: the Llama 4 failure, Yann LeCun's departure, Alexandr Wang's arrival, and the strategic pivot from open source to closed. The key question is whether Avocado can deliver on the memo's promises when it launches, expected in the first half of 2026.

What is certain is that Meta is no longer content with playing the role of open source's 'good guy.' With over $100 billion in investment, aggressive recruitment of competitor talent, and a complete organizational overhaul, Meta has thrown down a direct challenge to the frontier AI market dominated by OpenAI, Google, and Anthropic. Avocado will be the first test of that challenge.
