Pentagon's Ultimatum to Anthropic: Can Amodei Hold the Line on His Principles?
Defense Secretary Hegseth summoned Anthropic CEO Amodei to the Pentagon and gave him a Friday deadline to allow unrestricted military use of Claude. That same week, Anthropic removed its development pause pledge from RSP v3.0. Caught between principles and reality, the company watches the deadline approach.
On Sunday, February 23, U.S. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. The demand was simple and forceful: by Friday, February 27, allow unrestricted military use of Claude AI for 'all lawful purposes.' Refusal would trigger the cancellation of a $200 million contract, potential designation as a supply chain risk, and possible invocation of the Defense Production Act. It was effectively an ultimatum to a company whose entire identity is built on AI safety.
1. The Pentagon Summons: Hegseth's Friday Deadline
According to NPR and the New York Times, Secretary Hegseth delivered the ultimatum to Amodei in person. The core demand: Anthropic must completely remove its self-imposed military use restrictions and offer Claude under the same 'all lawful purposes' terms as every other AI company.
The trigger was the January 2026 Venezuela operation, in which Claude was used through Palantir's platform during the capture of Maduro. When an Anthropic executive subsequently inquired with Palantir about the circumstances, word reached the Pentagon. From the Defense Department's perspective, a contracted AI company effectively monitoring military operations from behind the scenes was intolerable.
2. The Pentagon's Three Threat Cards: From Contract Cancellation to Huawei-Level Sanctions
According to Fortune and Axios, the Pentagon is considering three escalation options against Anthropic.
First, immediate termination of the $200 million defense contract. This alone would shake Anthropic's entire government business. Second, designation as a 'supply chain risk' — effectively the same treatment applied to Huawei, which would require all U.S. companies to sever ties with Anthropic. This would devastate partnerships with Palantir and AWS, and potentially impact the civilian customer base. Third, invocation of the Defense Production Act, which grants the government authority to commandeer private sector resources during wartime.
Pentagon CTO Emil Michael characterized Anthropic's military restrictions as 'undemocratic' — arguing that an unelected company should not dictate national security policy. This marked a dramatic shift from the earlier 'we want cooperation' tone.
3. Amodei's Red Lines: Drone Armies and Mass Surveillance
Amodei's principles boil down to two non-negotiables: no mass domestic surveillance and no fully autonomous weapons. He accepts military collaboration in intelligence analysis, logistics optimization, and cyber defense — but these two red lines are immovable.
In his January 2026 essay 'The Adolescence of Technology,' Amodei described his most feared scenario: 'a small number of people operating drone armies.' Constitutional protections depend on there being a human who can disobey an illegal order — a safeguard that disappears with fully autonomous weapons. He has backed this stance with money as well, donating $20 million to organizations advocating for AI regulation.
But this principled stance increasingly means isolation. OpenAI, Google, xAI, and Meta have all accepted the 'all lawful purposes' condition. Google withdrew from Project Maven in 2018 under employee pressure, only to return to defense contracts in 2025. Anthropic stands alone in its resistance.
4. RSP v3.0: The Development Pause Pledge Disappears
On February 24, the day after the ultimatum, Anthropic released Responsible Scaling Policy v3.0. The timing is striking. The key change: the previous RSP's 'absolute commitment' to halt model development if safety could not be assured has been removed.
Anthropic explained the rationale: the Trump administration's deregulatory stance meant the expected national-level AI safety regulations never materialized. If one company alone pauses development, less safe competing models would dominate the market, creating greater overall risk.
Anthropic emphasized that RSP v3.0 is a separate decision unrelated to military policy. But experts are reading the simultaneous release as a 'signal of concession.' One of the company's most hardline safety principles was voluntarily walked back. Within the AI safety community, concern is spreading that 'principles are starting to crumble in the face of reality.'
5. Anthropic's Response: Walking the Tightrope Between IPO and Principles
Reuters confirmed on February 24 that Anthropic has no plans to ease its military use restrictions. At least officially, the red lines remain.
But the reality is challenging. Anthropic is currently valued at $380 billion and preparing for an IPO. A head-on collision with the U.S. Department of Defense could directly damage investor confidence. If the supply chain risk designation materializes, cascading effects would hit AWS-based government cloud contracts, the Palantir partnership, and potentially civilian enterprise customers.
Meanwhile, the Pentagon faces its own dilemma. One defense official admitted that rival AI companies' models are 'just behind' Claude — there is no frontier model ready to immediately replace it. This mutual dependency creates room for negotiation.
6. Expert Reactions: Congress Should Step In
Expert reactions have been largely sympathetic to Anthropic while pointing to structural problems. The Electronic Frontier Foundation (EFF) stated that 'human rights principles should not be abandoned under government pressure.' Legal analysis publication Lawfare argued that 'rules for military AI should be set by Congress, not individual companies.' Relying on a single company's moral judgment for national security policy is inherently unstable.
A professor at NYU Stern School of Business called this situation 'a litmus test for whether AI can be deployed responsibly.' If Anthropic holds, it sets a precedent that companies can set ethical boundaries with governments. If it folds, the entire AI safety discourse takes a hit.
South Korean media outlets including News1, Hankook Ilbo, Yonhap News, and MBC have also covered the Pentagon-Anthropic conflict, analyzing how the global debate on military AI use could carry implications for Korea's own defense AI policies.
Conclusion: The Deadline Approaches
Friday, February 27 — Hegseth's deadline — is approaching. Anthropic faces three scenarios: maintain its red lines and absorb contract termination and sanctions; find a compromise through partial concessions; or fully capitulate and accept the same terms as its competitors.
The removal of the development pause pledge from RSP v3.0 that same week could signal that Anthropic has already begun compromising with reality — or it could be a strategic retreat, trading ground elsewhere to hold a harder line specifically on military policy. Either way, the outcome of this deadline transcends a single company's contract dispute. It will be an inflection point showing whether the value of AI safety can survive direct pressure from state power.
Sources
- NPR - Pentagon gives Anthropic a deadline to allow military AI use
- New York Times - Hegseth Summons Anthropic CEO to Pentagon Over AI Restrictions
- Fortune - Inside the Pentagon's ultimatum to Anthropic
- Reuters - Anthropic says no plans to ease military AI restrictions
- Axios - Pentagon threatens Anthropic with supply chain risk designation
- Wall Street Journal - Anthropic Releases New Safety Policy Amid Pentagon Standoff
- Time - The AI Company That Won't Let the Pentagon Do Whatever It Wants