
Pentagon Blocks Anthropic's Claude After CTO Emil Michael Says Model 'Pollutes' Defense Supply Chain
Key Takeaways
- Pentagon CTO Emil Michael said Anthropic's Claude would 'pollute' the defense supply chain.
- Michael said Claude's 'constitution' embeds different policy preferences into the model.
- The Pentagon's stance is a public rebuke of Anthropic, signaling deepening tensions over AI vendors' roles in defense.
What happened
The Pentagon’s top technology official, Defense Department CTO Emil Michael, publicly rebuked Anthropic’s Claude models this week, saying they would “pollute” the defense supply chain and marking Anthropic as a supply-chain risk.
“Earlier this month, the Defense Department took the historic step of labeling Anthropic a supply chain security concern, representing the first instance of an American enterprise receiving such a designation”
CNBC reports Michael said Claude would “pollute” the agency’s supply chain and that Anthropic is “the first American company to publicly be labeled a supply chain risk,” a characterization echoed by Blockonomi, which called the move the “historic step of labeling Anthropic a supply chain security concern.”

The Tech Buzz described Michael’s comments as a hard line and the strongest public rebuke yet of a major AI vendor by a senior defense official, saying the statement “signals deepening tensions over which AI companies can be trusted with national security applications.”
Why DOD objected
Michael justified the designation by pointing to Claude’s built-in governance — Anthropic’s public “constitution” — arguing the model contains a distinct set of policy preferences “baked into” its behavior that could compromise military capabilities.
CNBC reported Michael saying the models have “a different policy preference that is baked in,” and Blockonomi quoted his language about the constitution and the model’s “soul,” arguing those embedded policies create supply-chain risk.

The Tech Buzz noted this stance is especially striking because Anthropic has positioned itself as an AI-safety leader, underscoring tensions between safety-oriented model governance and defense requirements.
Operational impact
The designation carries immediate operational consequences: defense contractors and vendors must certify they are not using Claude in Pentagon work, while the Pentagon acknowledges it cannot instantly remove embedded systems.
“Defense Department CTO Emil Michael on Thursday said Anthropic’s Claude artificial intelligence models would ‘pollute’ the agency’s supply chain because they have ‘a different policy preference’ that is baked in”
CNBC explained the designation “will require defense contractors and vendors to certify that they don't use Claude in their work with the Pentagon,” and Blockonomi described the Pentagon's staged transition plan and Michael’s comment that the agency cannot “just rip out” Anthropic’s technology overnight.
Despite the blacklist, CNBC and Blockonomi reported that some contractors, including Palantir, continue to use Claude and that Claude has been used in at least one U.S. military operation.
Anthropic's response
Anthropic has responded with legal action, suing the administration and calling the supply-chain designation “unprecedented and unlawful,” while arguing the move threatens hundreds of millions in contracts and will cause irreparable harm.
CNBC reported Anthropic sued the Trump administration, calling the government’s actions “unprecedented and unlawful,” and Blockonomi noted the company’s court filing claims it faces “irreparable” damage with hundreds of millions of dollars of business at risk.

The Tech Buzz also highlighted Anthropic’s aggressive response and framed the episode as a major confrontation between a safety-focused AI company and the Pentagon.
Broader implications
Observers say the move sets a new precedent by applying a supply-chain security label to a domestic AI vendor, raising questions about how defense priorities will intersect with companies that publish safety-focused governance frameworks.
CNBC emphasized that Anthropic is the first American company to receive this label, Blockonomi stressed that the classification had been previously reserved for foreign threats, and The Tech Buzz framed the episode as signaling deepening tensions over trust and vendor selection as defense agencies accelerate AI adoption.
