The Autonomy Review

OpenAI and Google Workers Rally Behind Anthropic Against the Pentagon, and Your Agent Improves How It Improves Itself

OpenAI and Google Workers Rally Behind Anthropic Against the Pentagon

More than 30 employees of OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief this week supporting Anthropic in its legal fight against the U.S. Department of Defense. Earlier this year the Pentagon labeled Anthropic a "supply chain risk," an unprecedented designation for a U.S. company that bars department employees and contractors from using Claude. Anthropic alleges the designation was retaliatory, punishing the company for its public stance on AI safety. On April 8, a federal appeals court declined to lift the label, writing that "the equitable balance here cuts in favor of the government." A separate California court has granted Anthropic a preliminary injunction, and that order remains in effect.

The significance extends well beyond Anthropic. When employees of rival labs publicly back a competitor against the government, it signals a shared concern that political leverage over AI companies could reshape the entire industry. The amicus brief warns that "this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness." If the supply-chain risk designation survives judicial review, any AI company that takes an inconvenient public position on safety, regulation, or military use could face the same treatment. For builders, investors, and compliance teams, this case will set the precedent for how much independence AI labs retain when their positions collide with government priorities.
