AI Ethics Clash: Silicon Valley Unites Against Government Overreach
After Trump's federal ban on Anthropic, competitors including OpenAI and Google employees rallied to Anthropic's side. Over 500 Big Tech employees signed public petitions, xAI alone complied with the Pentagon's terms, and the largest tech worker political action since 2018 is now unfolding.

On February 27, 2026, when President Trump announced a full federal ban on Anthropic, something unexpected happened in Silicon Valley. Competitors rallied to Anthropic's side. From OpenAI's Sam Altman to hundreds of Google employees, a solidarity movement has formed that transcends the usual competitive dynamics of the AI industry.

This is not simply an alliance between companies. It is the largest tech worker political action since Google's Project Maven in 2018, and an industry-wide backlash against the framing of AI safety guardrails as 'political positions.' A moment has arrived that is fundamentally reshaping the relationship between Silicon Valley and the U.S. government.

1. Competitors Supporting Competitors: Altman's 'Same Red Lines' Declaration

[Image] OpenAI CEO Sam Altman declared his company shares Anthropic's red lines on military AI.

The most notable response came from OpenAI. CEO Sam Altman stated clearly on CNBC: "We share the same red lines." Anthropic's biggest competitor had publicly taken its side. Altman reportedly communicated this position to employees through an internal memo as well.

The weight of this statement becomes clear only with context. OpenAI quietly removed its military use ban in January 2024 and strengthened its relationship with the government by integrating ChatGPT into the Pentagon's GenAI.mil platform. For that same OpenAI to invoke the word 'red lines' again signals that the Trump administration's demands have reached a level the entire industry finds unacceptable.

The key driver is not goodwill toward Anthropic but self-preservation. If Anthropic kneels today, the same demands will come to OpenAI tomorrow. Silicon Valley has begun to recognize this clearly.

2. Google Employee Revolt: Memories of Project Maven Resurface

The backlash at Google has been even more intense. Over 100 employees sent a formal letter to AI chief Jeff Dean, demanding restrictions on Gemini's military use. The letter's central agenda was restoring the 'weapons and surveillance prohibition principles' that Google had already quietly removed in February 2025.

This echoes the 2018 Project Maven affair. Back then, Google participated in the Pentagon's drone footage analysis project only to face a revolt from over 4,000 employees, ultimately abandoning the contract. That memory is being resurrected now.

The difference lies in scale and context. In 2018, it was Google's crisis alone. In 2026, employees at Microsoft and Amazon are also making similar demands to their own executives. Concerns over military AI use have spread beyond a single company to encompass the entire industry.

3. 'We Will Not Be Divided': 500 People Sign With Their Real Names

[Image] OpenAI and Google employees signed the 'We Will Not Be Divided' petition with their real names.

The most symbolic expression of industry solidarity is the 'We Will Not Be Divided' open petition. Published on notdivided.org, it has been signed by nearly 500 Big Tech employees using their real names, including 421 from Google and 76 from OpenAI.

The petition's core message is clear: "The Pentagon is trying to divide each company through fear." There is a widespread perception that the Department of Defense is employing a 'divide and conquer' strategy, pressuring companies one by one into compliance. Indeed, the federal ban on Anthropic could have served as a warning to other companies: 'this could happen to you.'

But the strategy backfired. Instead of cowing the industry, the threat rallied competitors' employees to Anthropic's side. This is being recorded as the largest tech worker political action since Project Maven in 2018.

4. xAI, the Sole Compliant: Accepting 'All Lawful Purposes'

Within this industry-wide solidarity, there is exactly one exception: Elon Musk's xAI. On February 23, xAI signed a classified system contract for Grok, agreeing to 'all lawful purposes.' It is the only major AI company to fully accept the Pentagon's terms.

xAI's compliance is inseparable from Musk's political positioning. For Musk, a key supporter of the Trump administration and head of the Department of Government Efficiency (DOGE), conflict with the Pentagon is an unthinkable option. This dynamic is creating a new fault line in the AI industry: a gap between the government-compliant camp (xAI) and the principles-first camp (Anthropic and its allies).

Siding with the Pentagon, former Uber VP Emil Michael attacked Dario Amodei as a 'liar' with a 'God complex.' Such personal attacks, however, have had the opposite effect, strengthening sympathy for Anthropic.

5. Trump's AI Policy: A Fundamental Break from Biden

Understanding the current conflict requires examining the Trump administration's AI policy stance. The Biden administration emphasized AI safety and regulation through Executive Order 14110. Trump repealed this order immediately after taking office and pivoted in the opposite direction with a 'Prevent Woke AI' executive order.

The Trump administration's approach can be summarized along three axes. First, nullifying state-level AI regulations at the federal level. Second, framing AI safety guardrails as 'ideological bias (woke AI).' Third, demanding unlimited government access to AI models.

This framing is being led by David Sacks, the AI/Crypto policy czar. Sacks defines AI safety guardrails as 'leftist ideology,' packaging their removal not as deregulation but as 'ideological correction.' However, allegations of conflicts of interest stemming from Sacks' own investments in AI companies are raising questions about the purity of this framing.

6. Congress and Civil Society: Bipartisan Voices of Concern

[Image] A U.S. congressional hearing on AI. Calls are growing for Congress to set rules for military AI use.

This conflict is not limited to Silicon Valley versus the executive branch. Voices of criticism are emerging from Congress and civil society as well. Four bipartisan senators sent a formal letter to Defense Secretary Hegseth, demanding clear standards on the scope and limits of military AI use.

The liberal-leaning Brennan Center warned about the dangers of AI surveillance from a Fourth Amendment perspective. Notably, even the conservative-leaning Cato Institute has criticized the Pentagon's approach. Concerns about AI safety are becoming a transpartisan issue.

Lawfare also published an analysis arguing that "Congress must set the rules for military AI." The executive branch is currently defining the scope of military AI use unilaterally, and critics argue that making decisions of this magnitude without congressional legislation raises a fundamental question of democratic legitimacy.

7. Industry Impact: A World Where AI Safety Becomes 'Political'

The most dangerous precedent this crisis is setting is that AI safety guardrails are beginning to be classified not as technical judgments but as 'political positions.' President Trump referred to Anthropic as 'left-wing lunatics,' and Defense Secretary Hegseth declared he 'won't be held hostage to Big Tech's ideological whims.'

If this framing solidifies, AI companies' safety research itself could be transformed into a political risk. Setting guardrails on autonomous weapons would be interpreted as 'anti-government,' and expressing concerns about mass surveillance would be branded as 'ideological.'

The practical industry impact is also significant. Government contracts are claiming an ever-larger share of AI companies' revenue structures. As Anthropic's case demonstrates, refusing government terms can result in the extreme sanction of a federal use ban. This precedent significantly weakens the economic incentives for other AI companies to maintain their safety principles.

Conclusion: Silicon Valley Chose Solidarity Over Division

The Pentagon sought to isolate Anthropic, but the result was the opposite. Threats bred not fear but solidarity, producing an unprecedented scene of competitors uniting on principle rather than competition. The 500 real-name signatures demonstrate that this solidarity is rooted not merely in executives' strategic calculations but in the moral conviction of front-line engineers.

Yet how long this solidarity can endure remains uncertain. When the economic realities of government contracts, shareholder pressure, and the national security logic of AI competition with China converge, principles can quickly become subjects of compromise. xAI's compliance already shows that possibility.

One thing is certain: decisions about AI safety are no longer an internal matter for technology companies. At the intersection of congressional legislation, civil society oversight, and above all the moral judgment of the people who build this technology, the future of AI is being decided.