A new book and recent reporting describe how Project Maven helps the US military fuse many data sources to speed up target selection, raising concerns about speed and errors.
In short: Project Maven, an AI system used by the US military, has sped up how targets are found and approved for strikes, according to a new book and reporting.
Project Maven began in 2017 as an effort to use AI to review drone video. AI here means software that can spot patterns, like highlighting objects in images the way a phone app can recognize faces. The goal was to reduce the time humans spent staring at footage and to help teams act faster.
According to journalist Katrina Manson’s book, the work first involved Google, but employee protests in 2018 pushed the company to withdraw. Palantir later became a main builder of the system, which uses technologies linked to several major tech companies, including Microsoft and Amazon. The Verge report also says NATO has recently purchased the Maven Smart System.
Maven now pulls together many kinds of information, including satellite images, radar, and even social media, and then helps military staff move through the steps of targeting. It is described as speeding up the “kill chain,” meaning the sequence from spotting something to deciding to strike it. An official quoted in the report says tasks that once took hours can take seconds, and that the number of daily targets could rise sharply when large language models (AI text systems, like an autocomplete on steroids) are added.
The report highlights concerns that faster systems can make mistakes more deadly, especially if the underlying data is wrong or outdated. It also describes ongoing military work toward weapons that can select and attack targets on their own, which would raise new questions about human control and accountability.
Source: The Verge AI