Anthropic temporarily suspended Peter Steinberger’s Claude account, then restored it hours later after his post on X drew attention.
In short: Anthropic temporarily suspended OpenClaw creator Peter Steinberger’s account for “suspicious” activity, then reinstated it a few hours later.
Peter Steinberger, the creator of OpenClaw, said on X that Anthropic suspended his account and warned that it may get harder to keep OpenClaw working with Anthropic’s Claude models. He shared a screenshot of a message from Anthropic that mentioned “suspicious” activity.
The suspension did not last long. After the post spread widely, Steinberger said Anthropic restored his access within hours. An Anthropic engineer also replied publicly, saying the company has never banned anyone for using OpenClaw and offering to help.
The incident came shortly after Anthropic changed how it charges OpenClaw users. Anthropic said a Claude subscription will no longer cover "third-party harnesses including OpenClaw." A "harness" here is a tool that connects an AI model to other software and automates longer tasks (like a power strip that lets you plug in many devices at once). OpenClaw users now have to pay separately, based on usage, through Claude's API, the paid doorway software uses to talk to Claude.
This highlights a common tension in AI tools: when a popular AI service changes prices or access rules, people who rely on add-on tools can suddenly face higher costs or service disruptions. For everyday users, it can mean an app that “just worked” last week may start costing more, or may be less reliable, even if they did nothing different.
Source: TechCrunch AI