More than 560 Google employees signed a letter asking Alphabet not to let its AI tools be used for classified military operations.
In short: Tech workers are increasingly speaking up about how their employers’ AI tools are used, especially for military and surveillance work.
More than 560 Google employees, coordinated by researchers at Google DeepMind, signed a letter urging Alphabet’s leadership not to allow the company’s AI tools to be used for classified military operations. “Classified” means work that the public cannot see or review. The employees said their proximity to the technology gives them a responsibility to prevent harmful uses.
This follows earlier employee protests at big tech firms over government and military contracts. Workers at Google, Amazon, and Microsoft have raised concerns about whether their products were used by the Israeli military during the Gaza war. In 2024, Google fired 50 employees who protested the sale of the company’s cloud computing services to Israel.
A separate dispute has also highlighted the tension between tech companies and the US military. Anthropic, an AI research lab, rejected a revised Pentagon contract in February because it feared the technology could be misused for mass surveillance (watching lots of people at once) or weapons that can act without a human deciding each strike. The Pentagon then labeled Anthropic a “supply-chain risk,” which could limit its government business, and Anthropic is contesting that decision in court.
At the same time, Google extended a $200 million contract with the Pentagon to cover classified operations. The company said it was proud to support national security.
More employee letters, protests, and legal fights are likely as AI spreads into sensitive areas like defense, policing, and intelligence. A key question is whether companies will allow open debate inside the workplace, or rely more on strict rules like non-disclosure agreements (contracts that limit what workers can say).
Source: Financial Times