U.S. Military Uses Anthropic Claude and Palantir AI to Strike 1,000 Targets in Iran
Image: The Conversation


11 March 2026 · Iran · 2 sources

Key Takeaways

  • U.S. military used Anthropic's Claude and Palantir's Maven for targeting in Iran
  • AI-enabled systems supported rapid, large-scale targeting operations against Iranian targets
  • Experts raised reliability and accountability concerns and urged retention of human judgment

Scale and tools used

Overview: Both sources report that in the opening 24 hours of U.S. strikes on Iran, the military struck roughly 1,000 targets, a pace that outlets attributed to the use of advanced AI tools.


The Conversation cites reporting that “the U.S. military was able ‘to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran’” and notes the military has “used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization.”


The Bulletin of the Atomic Scientists similarly states “With the help of artificial intelligence, the United States struck 1,000 Iranian targets in the first 24 hours of the war that began on February 28,” framing the scale of strikes alongside the reported integration of AI into targeting workflows.

Decision support, not autonomy

Nature of the systems: Analysts characterize Claude and similar tools as decision-support systems rather than autonomous weapons, but emphasize that they are embedded directly in targeting pipelines.

The Conversation explains that “Claude is an example of a decision support system, not a weapon” and that Claude is “embedded in the Maven Smart System, used widely by military, intelligence and law enforcement organizations,” with those applications providing “analytical and planning support, but human beings ultimately make the decisions.”


The Bulletin highlights uncertainty over how recommendations from such systems relate to human review: “if Claude is being used for targeting decisions, it remains uncertain what the relationship is between human review of strike targets vis a vis the system’s recommendation.”

Evaluation shortfalls

Evaluation and governance gaps: Commentators warn that the rapid integration of AI into targeting exposes gaps in evaluation, reliability thresholds, and accountability.


The Bulletin argues “it remains uncertain what precise evaluation metrics or thresholds were used to determine that the system was reliable enough to use in the context of use of force operations,” and calls for policymakers to “work with AI firms, researchers, and other stakeholders to construct a better evaluation infrastructure” including “robust metrics and tests for meaningful human control.”

The Conversation underscores that the U.S. military’s ability to use AI is built on “many decades of investment and experience,” stressing that organizational capacity matters for effective and safe deployment.

Civilian casualties and law

Civilian harm and legal risks: Reporting links AI-enabled targeting to acute humanitarian and legal concerns amid mounting civilian casualties.

The Bulletin notes ongoing investigations and distressing casualty reports, citing “reports about strikes on a school which killed at least 165 people, including many children, along with reports from human rights groups that more than 1,000 civilians have so far died in Iran during ongoing operations,” and warns that limited human oversight “could result in international blowback against the United States.”


The Conversation’s emphasis that humans are the ultimate decision‑makers intersects with this: if human review is weak or rushed, decision‑support tools can still contribute to harmful outcomes.

Tactics vs strategy

Strategic implications: Experts caution that tactical speed enabled by AI does not substitute for strategy and may produce mismatches between battlefield effects and political aims.


The Bulletin reminds readers that “speed itself can lead to mismatch between tactical gains and strategic goals” and that more advanced tools “will further challenge already complicated issues of human control and accountability.”


The Conversation situates current systems within a long history of automation—from SAGE to modern battle‑management tools—and argues that decision support systems now “augment the brain,” meaning organizational choices about oversight and integration will shape whether fast AI-enabled targeting produces lasting political success or costly strategic failures.
