Claude — the AI I use every single day — had been deployed in American military airstrikes against Iran. That was the first thing I discovered when I started digging into the story.
It started with a podcast I never miss: "News Connect," which I listen to every Sunday. Regular guest Makoto Shiono has spent the past three years or so raising concerns about global geopolitical instability nearly every week, and his commentary has sharpened my own interest in the subject. The Russia-Ukraine war has now dragged on for over three years, and more recently the military confrontation between the United States, Israel, and Iran has thrown the global balance into flux. I wanted to develop a clearer picture of this moment where geopolitics and technology intersect, so I set aside some time to explore it properly.
Anthropic, Claude's developer, is reported to have integrated Claude into the Pentagon's target selection platform known as "Project Maven." According to Fortune's reporting, the system supported strike decisions against more than 15,000 targets. The same tool I use to draft business documents and clean up code was, on the battlefield, executing an entirely different kind of task.
Ukraine: The World's Largest AI Testing Ground

It has now been more than three years since Russia invaded Ukraine.
This war has become the single largest real-world testing ground for cutting-edge technologies like AI and drones. Living in Japan, I find it hard to feel this viscerally, but the data paints a stark picture.
- Drone strike accuracy reportedly hovered around 30–50% before AI integration; AI guidance has pushed that figure to roughly 80%
- An estimated 70–80% of combat casualties in the Ukraine war are attributed to drones
- In December 2024, the first fully autonomous military operation was reportedly conducted in northern Kharkiv
Greater precision means fewer munitions are needed to inflict more reliable harm. Combat without a human pulling the trigger has already begun.
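To make the "fewer munitions" point concrete, here is a minimal sketch that treats the reported accuracy figures as independent per-shot hit probabilities. That is a deliberate simplification of real targeting, not a model of it:

```python
# Expected munitions per destroyed target, treating the reported
# accuracy figures as independent per-shot hit probabilities.
# A deliberate simplification for illustration, not a targeting model.
for hit_rate in (0.30, 0.50, 0.80):
    expected_shots = 1 / hit_rate  # mean of a geometric distribution
    print(f"hit rate {hit_rate:.0%}: ~{expected_shots:.1f} munitions per target")
```

Moving from the middle of the 30–50% range to 80% roughly halves the expected expenditure per target, which is exactly the "fewer munitions, more reliable harm" dynamic.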
The combination of drones and AI has also created a profound cost asymmetry between attacker and defender.
According to CSIS analysis, a single Iranian Shahed drone costs approximately $30,000–$50,000, while a single Patriot missile to intercept it runs $3–4 million — a cost ratio of roughly 85 to 1.
Send enough cheap drones, and the defender has no choice but to burn through expensive missiles. To rebalance that equation, Ukraine has developed and deployed low-cost interceptor drones at scale, priced at $3,000–$5,000 each. None of this technological adaptation would be feasible without AI.
Note: Cost estimates for Shahed drones vary by source, with some reports citing $20,000–$50,000 per unit. Ukrainian interceptor drones also range widely, with entry-level models around $1,000 and higher-end models in the $3,000–$5,000 range.
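The exchange arithmetic is easy to reproduce. Here is a minimal sketch using the midpoints of the ranges reported above; all figures are rough public estimates, not authoritative prices:

```python
# Cost-exchange arithmetic at the midpoints of the reported ranges.
# All unit costs are rough public estimates, not authoritative prices.
shahed = (30_000 + 50_000) / 2             # attacking Shahed drone
patriot = (3_000_000 + 4_000_000) / 2      # Patriot interceptor missile
interceptor = (3_000 + 5_000) / 2          # Ukrainian interceptor drone

print(f"Patriot per Shahed:      {patriot / shahed:.0f}x the attacker's cost")
print(f"Interceptor per Shahed:  {interceptor / shahed:.2f}x the attacker's cost")
```

At the midpoints, each Patriot intercept costs the defender nearly 90 times what the attack cost, in the same ballpark as the roughly 85-to-1 ratio cited above, while an interceptor drone flips the exchange to about a tenth of the attacker's cost.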
So why has AI evolved so rapidly in Ukraine specifically? The answer is simple: the speed of battlefield feedback.
Ukrainian engineers iterate on their algorithms week by week, then see results in actual combat the following week. Depending on those results, casualty numbers shift.
A CSIS report quoted an engineer saying that U.S. companies "don't have the same opportunity."
Conventional wisdom holds that Silicon Valley sits at the frontier of high technology, with the world's most talented engineers concentrated there. But no matter how skilled those engineers are or how generous their budgets, a real-time feedback loop where lives are on the line simply cannot be replicated in peacetime.
Battlefield Data Flowing Out to the World
The data accumulated on the battlefield is now being opened up to the broader world.
- Brave1 Dataroom (January 2026): A platform launched by Palantir and Ukraine's Ministry of Defense enabling AI model training using combat footage, thermal imaging data, and drone flight logs
- Battlefield Data Sharing with Allies (March 2026): Ukraine opened its battlefield data to allied nations — reportedly a first-of-its-kind initiative globally
- UNITE-Brave NATO: a NATO innovation program, reportedly funded on the order of €10–50 million
The battlefield is shifting from "a place where individual nations develop their AI" to "a system where Ukraine and the broader Western alliance pool data and evolve AI together."
Why the Battlefield Is the Fastest Place for AI to Evolve
The fact that cutting-edge technology advances most rapidly in wartime is a pattern that has repeated itself across centuries:
- The internet was born as ARPANET, designed for military purposes
- GPS originated as a military positioning system
- Drones trace their lineage to military reconnaissance aircraft
The pressure of "fail and people die" makes the battlefield the most powerful driver of technological evolution. AI is being applied in medicine and finance too, but no other domain turns life-or-death failure into instant feedback and a demand for improvement the following week. No comparable development environment exists.
Military strategy has long used the concept of the OODA Loop:
- Observe
- Orient
- Decide
- Act
Whoever cycles through this loop faster wins. AI is now compressing the "Orient" and "Decide" phases from hours down to seconds.
- DELTA system: Ukraine's AI target identification system, reportedly identifying 2,000+ enemy targets per day
- Avengers AI Platform: Said to identify 70% of enemy vehicles in 2.2 seconds
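As a toy illustration of why compressing just two phases matters so much, here is a sketch with invented phase durations; none of the numbers below are sourced figures:

```python
# Toy OODA-loop timing. All durations are invented for illustration,
# not sourced. The point: compressing Orient and Decide alone
# collapses the whole cycle, even if Observe and Act are unchanged.
def cycle_seconds(observe, orient, decide, act):
    return observe + orient + decide + act

# Hypothetical human-staffed cycle: orientation and decision take hours
human = cycle_seconds(observe=300, orient=2 * 3600, decide=3600, act=600)

# Same Observe and Act, but AI handles Orient and Decide in seconds
ai_assisted = cycle_seconds(observe=300, orient=5, decide=2, act=600)

print(f"human:       {human / 60:.0f} min per loop ({3600 / human:.2f} loops/hour)")
print(f"AI-assisted: {ai_assisted / 60:.0f} min per loop ({3600 / ai_assisted:.2f} loops/hour)")
print(f"speedup:     {human / ai_assisted:.1f}x")
```

Even with Observe and Act untouched, the cycle collapses from hours to minutes: the side running a dozen loops for the opponent's one is acting on fresher information every single time.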
Human life has become the ultimate feedback signal for technology. Success means the enemy falls; failure means your own people die. No commercial KPI comes close to that pressure, which is exactly why technology accelerates on the battlefield. Whether we welcome it or not, this is a fact we have to sit with.
Anthropic vs. the U.S. Government

On February 24, Secretary of Defense Hegseth issued Anthropic an ultimatum: grant unrestricted use of Claude's AI models for any lawful purpose by February 27. This included AI-powered mass domestic surveillance and fully autonomous weapons systems.
On February 26, Anthropic published a public statement in the name of CEO Dario Amodei refusing the demand. Its objections centered on two points:
- Mass domestic surveillance: "AI-driven mass surveillance is incompatible with democratic values." They could not accept a state in which citizens' movement records and browsing data are collected without warrants.
- Fully autonomous weapons: "Current AI systems are not reliable enough to underpin fully autonomous weapons systems." AI hallucinates and cannot yet be fully trusted.
The next day, President Trump barred federal agencies from using Anthropic's products and designated the company a "supply chain risk" — a label typically reserved for foreign entities. Anthropic subsequently sued for an injunction. The government turned to OpenAI as an alternative.
This is arguably the first case in history in which an AI developer was confronted with the question of where to draw the line and refused to cross it. Yet partial integration had already occurred, and Claude was ultimately used in the Iran strikes. And when Anthropic said no, the government simply found another willing partner. One company's ethical stance can be swapped out for another company's; the underlying problem doesn't go away. That is precisely what points to the need for regulation.
The Ethical and Regulatory Vacuum

A senior Ukrainian defense official was reportedly quoted as saying: "What we need is something that can kill the enemy efficiently."
It sounds brutal. But coming from a country defending itself against invasion, it is disturbingly easy to understand. They simply cannot afford to pause for an ethics discussion.
At the same time, the International Committee of the Red Cross (ICRC) has repeatedly sounded warnings about lethal autonomous weapons systems. A Rest of World article cited concerns about "crossing an irreversible moral threshold." In a world where AI kills without human judgment, who bears responsibility? The system's designers? Military commanders? Or no one at all?
Regulation has not kept pace with AI's evolution. But on the battlefields where lives are being traded, that regulatory vacuum is being exploited, and AI is making life-or-death decisions at this very moment.
The Distance Between This Reality and Everyday Life
If I am being honest, the battlefields of Ukraine and Iran feel very far from my daily life.
But working through these questions, researching them, and writing this piece, I came back to something: the importance of imagining what lies behind the tools we use. AI is not a neutral instrument. Behind it sit an organization's decisions, the terms of its contracts, the actual uses it is put to, and the environments in which it was sharpened, all of it always there in the background.
As a user, I cannot know everything or control everything. But I can at least stay conscious of what kind of company built a given tool and what principles guide it. I can periodically ask what is driving AI's rapid evolution.
If, as with the internet before it, the place where technology advances fastest is the battlefield, then the technological progress we enjoy may, somewhere, have been traded for someone's life. At minimum, I want to keep that imagination alive.
References
- Fortune: Iran war, Trump strikes, Anthropic AI used in Pentagon "Speed of Thought"
- CEPA: Ukraine's AI Drones Hunt the Enemy
- CSIS: Ukraine's Future Vision and Current Capabilities for AI-Enabled Autonomous Warfare
- IEEE Spectrum: Autonomous Drone Warfare
- CSIS: Unpacking Iran's Drone Campaign in the Gulf
- Defense News: Novel Interceptor Drones Bend Air Defense Economics in Ukraine's Favor
- Digital State Ukraine: Brave1 Dataroom with Palantir
- Military Times: Ukraine Opens Battlefield AI Data to Allies
- NATO: UNITE-Brave NATO Joint Initiative
- Atlantic Council: Missiles, AI, and Drone Swarms — Ukraine's 2025 Defense Tech Priorities
- TechPolicy Press: A Timeline of the Anthropic-Pentagon Dispute
- Anthropic: Statement — Department of War
- Rest of World: Anthropic AI and Iran Drone Warfare
- News Connect (Podcast)