US commanders say AI speeds up targeting, but Iran’s geography and recent civilian deaths raise doubts about accuracy and oversight.
In short: The US military says it is using AI to speed up targeting in the Iran war, but reports suggest Iran's terrain still makes accurate strikes difficult and risks to civilians remain.
US Admiral Brad Cooper, a commander in the war, said the military is using “advanced AI tools” to process large amounts of information in seconds instead of hours or days. He said this helps the US make decisions faster than Iran can react. He also said a human makes the final decision on any strike.
One concern is whether that human review is enough when choices are made very quickly. A major example cited in reporting is a 28 February 2026 strike on the Shajareh Tayyebeh girls’ school in Minab that killed at least 168 people, mostly schoolchildren. The Washington Post reported that the site appeared on a US target list.
The US military has used systems such as the Maven Smart System, which helps identify possible targets in images and other data (like a fast sorting tool that flags items for human review). Exactly how these tools are being used in Iran is not fully clear. Reporting also says AI improvements have not solved basic geography problems, such as finding hidden or moving targets in rugged terrain.
Iran is also using geography to its advantage in other ways, including attacks on exposed data centers in the Gulf, which supply the computing power behind many AI services. Wider disruptions, including energy risks around the Strait of Hormuz, could also squeeze the electricity and hardware those data centers depend on. The key question is whether militaries will slow down, add stronger checks, or change the rules for how AI is used when civilians could be nearby.
Source: NYTimes