Anthropic is moving into a new London office with space for up to 800 people, as AI labs compete for UK talent and work with UK safety officials.
In short: Anthropic is moving into a much larger London office to grow its team and business in Europe.
Anthropic, the company behind the AI assistant Claude, has leased a new office in London as it plans a major expansion. The new space is about 158,000 square feet and is designed to hold up to 800 people.
Anthropic opened its first London office in 2023 and currently has around 200 staff there. The larger office gives it room to quadruple that number over time.
The office is in the same area as other big AI groups, including Google DeepMind, OpenAI, Meta, and several AI-focused startups and research organizations. That concentration matters because it can make hiring and collaboration easier, much as restaurants clustered on one street draw more diners and workers to the area.
Anthropic said it will also deepen its work with the UK’s AI Security Institute, a government body that tests AI systems for risks. This week, the institute published a risk evaluation of Anthropic’s latest model, Claude Mythos Preview. Anthropic has limited who can access that model, citing concerns it could be misused by cybercriminals (online attackers).
The expansion comes as Anthropic fights a legal battle in the US over limits it sought on how its AI could be used, including refusing uses tied to mass surveillance and autonomous weapons.
London is becoming a central place for AI companies to hire from British universities and turn research into products people and businesses use. Anthropic’s move adds to that competition and keeps the UK government closely involved in testing how powerful AI tools might be used or abused.
Source: Wired