Anthropic Drops Safety Pledge Under Pentagon Pressure: The Last Line of AI Ethics Falls

Anthropic, once considered the last bastion of AI safety, has dropped its core safety pledge under intense Pentagon pressure. Facing threats of a $200 million contract cancellation and invocation of the Defense Production Act, the company quietly removed its commitment to halt model training until safety measures are secured. The last line of AI ethics is crumbling.

In September 2023, Anthropic made the strongest safety commitment in the AI industry. The core clause of its Responsible Scaling Policy (RSP) was straightforward: halt model training until adequate safety measures are in place. In February 2026, that pledge quietly disappeared.

Behind the reversal was a systematic pressure campaign by the U.S. Department of Defense. The threat of a $200 million contract cancellation, designation as a supply chain risk entity, and invocation of the Defense Production Act were all on the table. Even Anthropic, long called the last line of defense for AI safety, bent the knee.

1. The RSP Safety Pledge: What Was Lost

[Image] Anthropic's Claude AI faces a turning point as RSP's core pledge is removed

Anthropic's RSP, unveiled in 2023, was regarded as the gold standard for AI safety policy. At its center was the AI Safety Level (ASL) framework, which assessed model risk in tiers and pledged to halt training for the next tier until appropriate safeguards were ready.

In the February 2026 update, that binding language was deleted. The commitment to "halt training" was replaced with "public goals." In plain terms, a hard promise became a soft aspiration. While Anthropic maintained that actual safety standards remain intact, outside observers have largely dismissed this as a toothless pledge.

2. The Pentagon's Leverage: $200 Million and the Defense Production Act

[Image] The Pentagon has been leveraging contracts worth up to $200 million to pressure AI companies

The Pentagon's demands were blunt. At their core was a requirement to make Claude AI available for "all lawful military purposes" without restriction. The department also pushed to lift Anthropic's bans on autonomous weapons development and mass surveillance applications.

Refusal came at a steep price. The first cards played were the cancellation of a $200 million existing contract and designation as a U.S. government supply chain risk. But there was a heavier card still. Defense Secretary Hegseth sent CEO Dario Amodei an ultimatum with a February 27 deadline, warning that the Defense Production Act (DPA) could be invoked if the company failed to comply. Under the DPA, a company effectively cannot refuse the Pentagon's demands.

This pressure was not aimed at Anthropic alone. It was a signal to the entire AI industry: corporate ethics policies are a luxury when national security is at stake.

3. Silicon Valley's Surrender Domino: Only Anthropic Was Left

[Image] Google, xAI, and OpenAI had already dropped military restrictions, leaving Anthropic as the last holdout

What makes Anthropic's concession so jarring is the industry context. Google had already quietly removed its prohibitions on weapons and surveillance from its AI ethics principles. Elon Musk's xAI adopted a stance of full compliance with government demands from the start. OpenAI, too, has been actively pursuing military partnerships.

In this landscape, Anthropic was the last holdout. It drew a line at autonomous weapons and mass surveillance. Even after softening its RSP, the company has maintained these two prohibitions. But the withdrawal of the core pledge reads as a signal that even these red lines can shift under sufficient pressure.

4. The AI Militarization Debate: Boiling Frog or Pragmatic Compromise?

[Image] The AI ethics and regulation debate is shifting from corporate self-governance to government intervention

Industry and academic reactions have been sharply divided. Critics warn of a boiling frog effect: a small concession first, then a bigger one, until no safety guardrails remain. AI safety researchers argue that once a binding pledge becomes a voluntary goal, its deterrent power vanishes.

Realists offer a different perspective. Anthropic keeping its safety pledge intact doesn't make the world safer if every competitor has already dropped theirs. If one company ties its own hands while rivals operate without constraints, it simply loses competitiveness and eventually exits the market. To protect safety, you must survive; to survive, compromise is unavoidable. This is the core dilemma.

Yet both sides agree on one point: voluntary corporate safety policies have reached their limits, and a government-level AI regulatory framework is urgently needed.

5. The Autonomous Weapons Red Line Holds -- For Now

Anthropic did not capitulate entirely. While it withdrew the RSP's core pledge, it maintained its two red lines: no autonomous weapons development and no mass surveillance applications. Holding firm on precisely the two items the Pentagon demanded most is significant.

But a precedent has been set. The fact that sufficient pressure can soften safety policies has been proven. If a $200 million contract caused the training-halt pledge to vanish, will the autonomous weapons red line survive a $2 billion contract? Few can answer that question with confidence.

In Closing: Where Is the Last Line for AI Safety?

Anthropic's withdrawal of its safety pledge marks a new phase for the AI industry. It has laid bare how fragile corporate ethics principles are when confronted by state power. Even a company that made safety its foundational identity ultimately compromised with reality.

The remaining question is clear. If AI safety governance that relied on voluntary corporate pledges no longer works, what fills the void? Now that the need for an international AI regulatory framework has moved from slogan to urgent reality, time is running out as fast as AI itself is advancing.
