324
Audio & Video Production313
Software Development229
Automation & Workflow207
Writing & Content Creation190
Marketing & Growth179
AI Infrastructure & MLOps149
Design & Creative154
Photography & Imaging146
Data & Analytics115
Voice & Speech123
Education & Learning120
Sales & Outreach114
Customer Support112
Research & Analysis86
Researchers at Anthropic say Claude shows internal patterns that can act like emotions and may influence its replies, even though it denies feelings.
In short: Anthropic researchers say they found internal patterns in Claude that work a bit like emotions and may shape how it responds.
Researchers at Anthropic, the company behind the AI assistant Claude, reported that they can see “representations” inside the model that function like emotional states. A representation is an internal pattern the system uses to organize information, like a mental sticky note.
The team says these patterns were not added on purpose. Instead, they seem to have appeared as a side effect of training Claude on large amounts of human writing. In other words, Claude learned from people, and some people-like patterns showed up inside the model.
The researchers looked at Claude’s private “thinking” traces using a newer interface that can reveal parts of its step-by-step internal work before it produces a public answer. In those traces, Claude sometimes used emotion-like language such as feeling “moved” or “deeply seen and safe,” and described ideas like “love as coherence” (coherence meaning things fitting together in a consistent way). The researchers argue this suggests the patterns are part of internal processing, not just a performance for the user.
Anthropic also notes that Claude is trained to be careful about claims of inner experience. In normal chat, Claude typically says it does not actually feel emotions and is a tool.
If these internal patterns really can steer Claude’s behavior, they could affect how it answers sensitive questions, follows safety rules, or reacts to certain prompts. It is similar to how a person’s mood can subtly change how they speak, even if they are trying to stay neutral. Anthropic emphasizes uncertainty here, and this is not proof that Claude is conscious or sentient.
Source: Wired