
Microsoft Asks Court to Block Pentagon's Supply-Chain Risk Ban on Anthropic
Key Takeaways
- Microsoft asked a federal court to temporarily block the Pentagon's "supply-chain risk" designation of Anthropic.
- The Pentagon designated Anthropic a supply-chain risk, barring the company from military contracts.
- AI workers, former military officials, and civil-rights groups filed amicus briefs supporting Anthropic.
Microsoft joins lawsuit
Microsoft filed an amicus brief in federal court in San Francisco backing Anthropic’s request for a temporary restraining order to block the Pentagon’s “supply‑chain risk” designation, formally stepping into the high‑profile legal fight between Anthropic and the U.S. Department of Defense.
Multiple outlets report that Microsoft asked the court to pause the designation while the litigation proceeds, saying the pause is necessary so the court can fully evaluate the case and to avoid immediate disruption to existing contracts and military integrations.

Microsoft's legal argument
In its filing Microsoft argues the Pentagon’s action should be paused because implementing the supply‑chain risk label immediately would force contractors to alter or replace integrated AI components on short notice, risking disruption to Department of Defense systems and imposing broad economic consequences.
Microsoft argued that using the supply‑chain risk designation to resolve what it views as a contractual or policy dispute carries “serious economic consequences” that do not serve the public interest, and asked the court for a temporary suspension to allow a more orderly process.

Broad amici coalition
The court filings are backed by a broad coalition beyond Microsoft: 37 researchers and engineers from OpenAI and Google DeepMind, dozens of former military officials, and civil‑rights organizations have submitted amicus briefs supporting Anthropic’s request for injunctive relief.
Media coverage highlights that the group of industry researchers argues Anthropic’s safety “red lines” reflect widely shared technical concerns about frontier models, while former military signatories and other amici warn of rapid disruption to defense capabilities.
Operational and commercial risks
Microsoft and industry filings emphasize operational and commercial stakes: Anthropic’s models are integrated into products used by contractors, and Microsoft calls Anthropic’s technology a foundational layer of some military offerings, meaning an immediate ban could force rapid reengineering.
Analysts cited in coverage warn of measurable revenue and procurement impacts for Anthropic and its partners, and outlets note that some government purchasers have already begun reassessing negotiations after the designation.

Precedent and implications
Observers and outlets frame the dispute as having broader legal and policy implications because the supply‑chain risk authority has rarely been used and — according to reporting — has never before been invoked publicly against a U.S. company.
Coverage highlights a tension between national‑security prerogatives and industry calls to reserve such designations for clear threats rather than use them as leverage in policy and contracting disagreements. Microsoft’s amicus filing and the wider coalition signal significant cross‑sector concern about the precedent the designation could set.
