Anthropic has walked back some of its recent missteps.
1. Claude Code had real quality issues, but the API was not affected. Anthropic says the problems hit Claude Code, the Claude Agent SDK, and Claude Cowork, not the base API or inference layer.

2. Three separate changes caused the problems:
- lowering Claude Code's default reasoning effort from high to medium,
- a caching bug that repeatedly dropped older reasoning from sessions,
- a system prompt change meant to make Claude less verbose.

3. The reasoning-effort change made Claude feel less smart. Anthropic changed the default to reduce latency and token usage, but users preferred stronger reasoning by default. They reverted it on April 7.

4. The caching bug made Claude forgetful and repetitive. After an idle session, Claude was supposed to clear old reasoning once. Instead, it kept clearing reasoning on every turn, which made it lose context about its own decisions.

5. The anti-verbosity prompt hurt coding quality. Anthropic added a prompt telling Claude to keep text between tool calls very short and final answers brief. Later testing showed it reduced performance, so they reverted it on April 20.

6. The combined effect looked worse than any single bug. Because the three issues affected different users, models, and timelines, the result looked like a broad, inconsistent decline in Claude quality.

7. Anthropic is resetting usage limits for subscribers. As of April 23, they say they are resetting usage limits for all subscribers.

8. They are changing their release process. Anthropic says it will have more internal staff use the exact public build, broaden evals, add stricter prompt review, roll changes out more gradually, and add soak periods for changes that could reduce intelligence.

Bottom line: Claude Code did not get worse because the model itself was intentionally degraded. Anthropic says it was a chain of product-layer and prompt-layer mistakes that made Claude seem less capable, less consistent, and more forgetful.
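The caching bug described in point 4 is a classic one-shot-flag slip: a "clear stale reasoning after idle" step that was meant to fire once but fired every turn. A minimal sketch of that failure mode, with hypothetical names (the postmortem does not show Anthropic's actual code):

```python
# Hypothetical sketch of the bug in point 4: a one-shot "session was idle"
# flag that is never reset, so reasoning history is wiped on every turn
# instead of once after resuming.

class Session:
    def __init__(self):
        self.reasoning_history = []  # reasoning blocks from prior turns
        self.was_idle = False        # set when the session resumes after idling

    def resume_after_idle(self):
        self.was_idle = True

    def take_turn_buggy(self, new_reasoning):
        if self.was_idle:
            # BUG: the flag is never cleared, so this branch runs every
            # turn and the model keeps losing its earlier reasoning.
            self.reasoning_history.clear()
        self.reasoning_history.append(new_reasoning)

    def take_turn_fixed(self, new_reasoning):
        if self.was_idle:
            self.reasoning_history.clear()
            self.was_idle = False  # fix: drop stale reasoning exactly once
        self.reasoning_history.append(new_reasoning)


s = Session()
s.resume_after_idle()
for r in ["plan refactor", "refine plan", "conclude"]:
    s.take_turn_buggy(r)
print(len(s.reasoning_history))  # 1 -- only the latest turn survives
```

With the buggy path, only the most recent turn's reasoning ever survives, which matches the "forgetful about its own decisions" symptom; the fixed path keeps all three turns after the one-time cleanup.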
https://www.anthropic.com/engineering/april-23-postmortem