McKinsey says its internal AI tool Lilli was breached, but found no evidence that client data or confidential information were accessed.
In short: McKinsey says a breach of its internal AI tool, Lilli, exposed large amounts of data, but it found no evidence that client data or confidential information were accessed.
McKinsey disclosed a security incident involving Lilli, its internal generative AI platform. Generative AI refers to systems that can produce and summarize text (like a very fast writing assistant).
According to reporting, a security startup called CodeWall used an autonomous AI agent to break into the system in about two hours. An autonomous AI agent is software that can take steps on its own to reach a goal (like a bot that can try doors and follow clues without a person guiding every move).
The breach reportedly exposed 46.5 million chat messages, 728,000 files, 57,000 user accounts, and 95 system prompts. System prompts are the hidden instructions that tell an AI tool how to behave (like a script behind the scenes). The exposed material included strategy documents, mergers and acquisitions information, and client engagement records.
McKinsey said it patched all unauthenticated endpoints within 24 hours of the disclosure. Unauthenticated endpoints are parts of a system that respond without confirming who you are first (like a side door that opens without checking an ID). McKinsey also said it investigated with support from a third-party forensics firm and found no evidence that client data or confidential information were accessed.
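To make the "unauthenticated endpoint" idea concrete, here is a minimal, purely illustrative sketch of the check such an endpoint skips. None of this reflects Lilli's actual implementation; the function name, paths, and token store are hypothetical.

```python
from typing import Optional

VALID_TOKENS = {"secret-token-123"}  # hypothetical token store, for illustration only


def handle_request(path: str, auth_header: Optional[str]) -> int:
    """Return an HTTP status code for a request to an internal endpoint."""
    # An unauthenticated endpoint skips this entire check and answers
    # any caller -- the "side door that opens without checking an ID."
    if auth_header is None or not auth_header.startswith("Bearer "):
        return 401  # reject: no credentials presented
    token = auth_header[len("Bearer "):]
    if token not in VALID_TOKENS:
        return 403  # reject: credentials presented but invalid
    return 200  # authenticated caller may proceed
```

Patching an unauthenticated endpoint essentially means adding a gate like this (or routing the endpoint behind one) so that every request must present valid credentials before the system responds.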
Many companies are putting sensitive work into internal AI tools, often at large scale. The incident has prompted security experts to ask how a system handling confidential client work could have had a route in that did not require login details, and what checks companies should run before entrusting AI tools with high-stakes information.
Source: Financial Times