Microsoft Asks Federal Court to Block Pentagon's Ban of Anthropic
Image: Milenio

11 March 2026 · Technology and Science · 2 sources

Key Takeaways

  • Microsoft filed a federal court request to temporarily block the Pentagon's designation of Anthropic.
  • The Pentagon labeled Anthropic a "supply chain risk," threatening its military AI contracts.
  • Microsoft publicly backed Anthropic in its legal dispute with the U.S. government.

Court challenge overview

Federal legal fight: Microsoft has asked a federal court to temporarily block the Pentagon’s designation banning Anthropic, urging a judge to lift the restriction so the parties can pursue a negotiated resolution.


The Department of Defense last week declared Anthropic’s technology a national-security risk and moved to bar its use in defense projects, which Anthropic called “illegal” and “unprecedented” in a lawsuit against the administration of U.S. President Donald Trump.


Microsoft’s filing asks a judge to order the temporary lifting of the designation to allow a “more reasoned discussion,” and the company argues a judicial truce would let both sides seek a negotiated resolution rather than immediate exclusion from defense contracting.

Access and red lines

Why the ban happened: the core dispute is over access and usage guarantees.

The Pentagon demanded full, unrestricted access to Anthropic’s models “for any lawful purpose”; under the new designation, defense contractors must certify that they do not use Anthropic’s models in Pentagon projects.


Anthropic demanded contractual guarantees that its models would not be used in fully autonomous weapons systems or for domestic mass surveillance, and those ethical red lines were a central point of friction in contract talks.

Microsoft’s position

Microsoft’s stance and allies: Microsoft intervened in the case, filing a brief that argues a temporary judicial pause could preserve negotiation space and protect customers and broader AI deployment.


Microsoft says it shares Anthropic’s ethical limits, including the principle that U.S. AI should not be used for domestic mass surveillance or to initiate armed conflict without human control.

Microsoft’s filing was joined by other AI developers and a coalition of organizations to underscore wider industry and civil-society concern.

Customer access impact

Commercial and customer implications: Microsoft told customers it has reviewed the designation and concluded Anthropic’s products, including the Claude chatbot, can remain available to customers outside the Department of Defense through platforms such as M365, GitHub and Microsoft’s AI Foundry.

Microsoft also announced it would continue integrating Anthropic’s models into its products.


Cloud providers Google and Amazon, which also use Anthropic technology, told their customers Anthropic’s services will remain available outside defense contracts.

Next steps and stakes

What comes next and the stakes: the litigation will determine whether the Pentagon can impose a wide block on Anthropic’s technology for defense work without negotiated safeguards, or whether companies can insist on contractual limits on military and domestic-surveillance use.


Microsoft warns that the designation risks disrupting work for defense contractors and warfighters, while Anthropic argues the ban is unlawful.


The Pentagon has declined to comment on the filings, leaving the courts to determine whether a temporary lift of the designation or a negotiated resolution will proceed.
