
Key Takeaways
- Pentagon's Project Maven uses Palantir tech and Anthropic's Claude AI.
- Israel has deployed AI targeting programs in Iran, Gaza, and Lebanon.
- The report ties AI use to ongoing U.S.-Israel actions against Iran.
AI kill chain overview
As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations.
The system, known as Project Maven, relies on technology from Palantir and incorporates Claude, an AI model built by Anthropic.

Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets.
“You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes.
You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.
School strike and AI role
Transcript
AMY GOODMAN: As the U.S. and Israeli war extends into its 19th day, we turn now to look at how the U.S. is using artificial intelligence to identify and prioritize targets.
The system, known as Project Maven, was created by Palantir, and it incorporates the AI model Claude, built by Anthropic.

The Pentagon is investigating if the AI system played a role in the U.S. strike on the Iranian girls’ school that killed over 170 people, mostly girls.
This is CENTCOM Commander Admiral Brad Cooper talking about the use of AI in Iran.
ADM. BRAD COOPER: Our war fighters are leveraging a variety of advanced AI tools.
These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.
Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours, and sometimes even days, into seconds.
AMY GOODMAN: Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
The Pentagon also reportedly used the AI tools during the recent military attack on Venezuela when U.S. Special Forces abducted the Venezuelan President Nicolás Maduro and his wife, Cilia Flores.
This comes as a major rift has emerged between Anthropic and the Pentagon after Anthropic moved to restrict the use of its technology for mass surveillance of Americans and for fully autonomous weapons.
In late February, President Trump ordered federal agencies to stop using Anthropic products.
Defense Secretary Pete Hegseth declared the firm a supply chain risk, effectively cutting it off from government contracts and related work.
It marked the first time the Pentagon has designated a U.S. company as a supply chain risk, prompting Anthropic to sue.
On Tuesday, CNN reported that nearly 150 retired federal and state judges have filed an amicus brief supporting Anthropic in its lawsuit against the Trump administration.
We’re joined now by Craig Jones, senior lecturer in political geography at Newcastle University, author of The War Lawyers: The United States, Israel, and Juridical Warfare.
He’s the co-author of a new article in The Conversation headlined “Iran war shows how AI speeds up military 'kill chains.'”
Why don’t we start there, Professor Jones?
CRAIG JONES: Thank you.
Yeah, I mean, the U.S. military, the Israeli military, as your headlines have said, using AI, the kill chain is a bureaucratic mechanism whereby militaries go from trying to designate targets, to identify enemies and military targets, to the process of actually killing them.
They’re in the process across the 20th century, early 21st century, of speeding that process up.
Military drones have helped greatly with that.
And the latest front of that is AI.
As Bradley Cooper talked about, you’re reducing a massive human workload of tens of thousands of hours into seconds and minutes.
You’re reducing workflows, and you’re automating human-made targeting decisions in ways that, I think, open up all kinds of problematic legal, ethical and political questions.
AMY GOODMAN: The U.S.-Israel war in Iran is being described as the first AI war. Explain what that means, Craig.
CRAIG JONES: Yeah, I would say it’s not quite the first AI war.
As you mentioned, Israel has used AI in Gaza.
I think this was the first major use of AI in warfare.
I think, actually, the history goes back a little longer: computer programs partially enabled by AI have been used in the background of military systems for several years now.
It was used in a major way in Gaza in the first few months, where we saw tens of thousands of targets put in a target bank compiled by military intelligence.
Up to 35,000 suspected Hamas combatants found themselves on this list as Israel worked through that to assassinate them, as well as tens of thousands of targets that are ultimately part of the civilian infrastructure.
As you’ve said, the U.S. has used it with Maduro, and now Israel and the U.S. are also using these systems in Iran.
The key innovation here is twofold.
It is the use of AI for intelligence analysis.
Intelligence, military intelligence, is multi-format.
There is so much of it.
It hoovers up what they call signals intelligence, so mobile phones, internet traffic, SMS, mobile phone tracking, all kinds of things.
And the AI systems are being used to spot what militaries call patterns of life — you know, who meets with who, who talks with who, what are the nature of the messages, how are they interacting in ways which are deemed suspicious.
And the AI systems look for those patterns and make recommendations, which is the second innovation, for targets.
They nominate targets to this bank of targets, which then has — which we can talk about — some technical human oversight.
And that’s problematic, I think.
It’s problematic because that’s a really persuasive technology.
It’s nominating hundreds, thousands of targets potentially a day, and it’s working at speeds which are just beyond, you know, the evolution of human cognition in, again, ways that are problematic.
AMY GOODMAN: Can you explain how Palantir was used? This is being investigated by everyone, including the U.S. government and the Pentagon. It’s believed that the tools of Palantir and Claude, which is a product of Anthropic, may have been involved in the first strikes, on the first day of the U.S.-Israeli war on Iran, including the targeting of a girls’ school in southern Iran.
CRAIG JONES: Yeah, so, this strike on the girls’ school is at the moment the leading kind of civilian casualty incident, in which around, as you’ve said, 170, mainly girls, were killed, innocent civilians.
At the start, we should remember some of the history of this.
It was denied by the U.S. military.
Trump insinuated at one point that it was an Iranian missile.
It was later verified that it was indeed a U.S. series of Tomahawk missiles that struck this area.
And a U.S. preliminary investigation has now confirmed what many people thought: that the U.S. is responsible.
We’re not yet clear on the role of AI in that particular strike.
Whether that becomes clear in the coming days and weeks, we’ll have to see.
What we do know is that Anthropic’s Claude model, integrated with Palantir’s platform, has been used extensively to do several things, including the intelligence analysis.
So we can deduce that that AI system is at least open to making systemwide errors.
It did not identify the school as a school, which is extremely problematic, given that within a couple of days organizations such as The New York Times were able to verify via satellite imagery that a wall had been put up around 13 years ago between the school and an IRGC compound nearby.
If you had been watching drone footage from above, as militaries have the capability to do, even for half an hour or a few hours beforehand, you would have seen 170 girls dropped off by their parents that morning, and the site would have been identified as a nonmilitary target with clearly civilian usage.
AMY GOODMAN: But let’s get —
CRAIG JONES: So, we don’t yet —
AMY GOODMAN: Let’s drill down into this, because, yes, there was this military facility right next to it. As you described, years ago, a wall was built between the two, so you’ve got the school very clearly identified. But how does AI work, where you have this old, what, 10-year-old perhaps, information about it being a military base that’s fed in, and then it is never updated? Where do human beings come into this?
CRAIG JONES: Yeah, this is a really important question, where it gets tricky. But we know a lot already. It looks like an intelligence failure: the entire area had been marked on a map as a military compound.
There are obligations, legal obligations and ethical obligations, and just political obligations, within defense intelligence agencies to check this.
And what happens is, some of these targets are nominated from U.S. military bases back in the United States.
I’ve worked with some of those people over the last several years on what they call target nomination, on what it looks like.
They hand that over to CENTCOM, who I know you cover.
And they have bases in the Middle East.
There’s a central one based in Qatar, where these targeting decisions are executed.
There is an obligation for CENTCOM to check and double-check that intelligence, that it’s up to date, that everything’s kosher on the target.
It’s clear that that was not done. There should be human oversight of that, whether a target is AI-recommended or human-recommended.
There should be some human intelligence checking.
For whatever reason, and we don’t yet know why, it looks like that did not happen.
There’s also a really interesting technicality: everything in a society that the U.S. military is targeting is de facto placed on a no-strike list, because everything is assumed to be civilian.
In order to strike something, you need to get it off the no-strike list to be able to target it.
So, the question here is: Why was this school taken off a no-strike list, deemed a legitimate military target?
It looks like a combination of AI and human intelligence failure, to produce something, you know, truly catastrophic.
AMY GOODMAN: And talk about how Palantir interacts with Claude, which is owned by Anthropic, especially for the Luddites who are listening all over, for people who don’t quite understand how this all works.
CRAIG JONES: So, yeah, from what we know, Palantir provides a deep software system, a bit like a video game, that has all kinds of inputs through which you can look at targets.
You have all kind of variables, like, you know: What size missile should we drop? What is the compound that we’re looking at? What’s it made out of?
All these variables come with intelligence overlays.
And Claude is the thing in the background doing the processing of that data and making those recommendations, much as software runs behind what you see on a computer screen.
It then provides some parameters that the human operator, the targeteer, can play with.
Obviously, it’s highly sensitive and secretive, and beyond the very few people using it, even the designers at Anthropic who have the intelligence clearance and who’ve seen this stuff working with sensitive military data would be a very small number of people.
But from some of the demos they’ve released, we can see some of what this looks like.
And one of the most worrying developments that I’ve seen, and from what’s publicly available, is the lack of attention and ability to track civilian casualties within those programs.
And that is something which we’ve seen.
You know, the infrastructure of war lawyers and civilian casualty harm mitigation that administrations have built over several years in the U.S. Department of Defense has been eroded by the Trump administration, and you actually see that now programmed into the software.
AMY GOODMAN: This is Palantir CEO Alex Karp, interviewed on CNBC last week.
ALEX KARP: These technologies are dangerous societally.
The only justification you could possibly have would be that if we don’t do it, our adversaries and — will do it, and we will be subject to their rule of law.
So, if you decouple this from the support of the military, you’re going to have an enormous problem explaining to the American people why is it that we’re absorbing the risk of disrupting the very fabric of our society, including the most powerful parts of our society, if it’s not because it’s about maintaining our ability to be American in the near term and long term.
AMY GOODMAN: Craig Jones, if you can respond to the CEO of Palantir?
CRAIG JONES: Palantir has a long history of making tens of millions, billions, in profit from what I ultimately see as killing people in faraway lands that are all too easy not to care about.
I think this latest endeavor has kind of kick-started an AI arms race.
It’s been good to see at least Anthropic throw their hands up and say, “We want some ethical parameters put on that.”
But even that seems limited. And meanwhile, as that whole controversy with the Trump administration has been playing out, as you covered, we see Sam Altman from OpenAI rush in and take the contract that Anthropic ultimately dropped.
Huge profits.
The DOD, the Department of War, is a huge customer for many Silicon Valley firms.
We’ve seen Microsoft use their platforms for the Israeli targeting.
Apparently, Microsoft are looking into that.
We see Google AI analytics also used for Palantir and for U.S. DOD contracts.
This is huge money.
And I think the Silicon Valley community should wake up to the consequences of the technologies they’re working on and see their effects on the ground, which is where I work: with people who have lost entire families, who’ve had their homes destroyed, who have been displaced, who have had their legs blown off. There’s a real disconnect between the tens of billions being made in the profits of war and the people who suffer its consequences.
AMY GOODMAN: This is OpenAI CEO Sam Altman, who you mentioned, speaking at the India AI Impact Summit in New Delhi in February.
SAM ALTMAN: We don’t