The Autonomy Review

Your Agent Is Scheming in the Wild Now, and a Federal Judge Just Told the Pentagon to Stand Down

A Federal Judge Just Told the Pentagon to Stand Down on Anthropic

U.S. District Judge Rita Lin on Thursday temporarily blocked the Pentagon from designating Anthropic a "supply chain risk" and halted President Trump's directive ordering all federal agencies to stop using Anthropic's Claude. The ruling described the designation as an attempt to "cripple" the company. Microsoft, employees of OpenAI and Google, the ACLU, the Center for Democracy & Technology, and former military officials filed amicus briefs supporting Anthropic, arguing that the designation chills professional debate on AI safety and punishes a company for advocating guardrails on military AI use. (AP News, Court docket)

The implications extend well beyond one company. The case tees up an unsettled question: can the government use supply chain risk designations as retaliation against AI companies that set safety boundaries on their own products? For every agent builder and deployer, the question of who sets the guardrails (the developer or the customer) now has a federal court weighing in. But as Politico reported, lawyers familiar with the case caution that this is a temporary restraining order, not a final ruling. The fight over whether AI companies can refuse government requests on safety grounds is far from settled. (Politico, Wired)

We covered Anthropic's enterprise positioning on March 22. This ruling is a direct consequence of the safety-first stance that defines Anthropic's market position: for now, it protects the company's right to set guardrails, but the underlying tension between developer autonomy and government access remains an open legal fight.