News: Anthropic confirms three bugs degraded Claude Code [Fixed]
Anthropic on Wednesday published a detailed post-mortem acknowledging that three separate product-layer changes caused weeks of quality degradation in Claude Code, its flagship AI coding tool, vindicating developers who had spent weeks insisting the product had gotten worse. The company traced the issues to a reasoning effort downgrade, a caching bug that wiped session memory on every turn, and an overly aggressive system prompt designed to curb verbosity. All three changes were shipped independently between early March and mid-April, but their overlap in time created what appeared to users as broad, inconsistent performance decay.

The first change landed on March 4, when Anthropic lowered Claude Code's default reasoning effort from high to medium to reduce latency. The company's internal evaluations suggested the trade-off would yield only "slightly lower intelligence," but users noticed immediately. Anthropic reverted the change on April 7, calling it "the wrong tradeoff" in the post-mortem.

The second issue was a caching bug introduced on March 26. An optimization intended to clear old reasoning blocks once upon session resume instead triggered on every turn, effectively erasing Claude's working memory for the remainder of any affected session. The bug also caused faster-than-expected usage limit depletion, as each request became a cache miss. Anthropic acknowledged the bug "made it past multiple human and automated code reviews, as well as unit tests, end-to-end tests, automated verification, and dogfooding." It was fixed on April 10.

The third regression arrived on April 16, when Anthropic added a system prompt instructing Claude to keep responses between tool calls to 25 words or fewer — a response to the chattiness of the newly launched Opus 4.7 model. Internal testing later showed the instruction caused a roughly 3% drop on coding evaluations. It was reverted on April 20.
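The shape of the caching bug described above can be illustrated with a minimal sketch. All names here are invented for illustration; this is not Anthropic's actual code, only a toy model of a cleanup step that was meant to run once on session resume but instead ran on every turn:

```python
class Session:
    """Toy model of a session holding prior reasoning blocks (hypothetical)."""

    def __init__(self):
        self.reasoning_blocks = []
        self.just_resumed = True  # set when an old session is reloaded

    def handle_turn_buggy(self, prompt):
        # Bug: the cleanup runs unconditionally, so every turn wipes
        # accumulated reasoning -- the session never builds up context.
        self.reasoning_blocks.clear()
        self.reasoning_blocks.append(f"reasoning for {prompt!r}")
        return len(self.reasoning_blocks)

    def handle_turn_fixed(self, prompt):
        # Intended behavior: clear stale blocks only once, on the first
        # turn after resume, then let context accumulate normally.
        if self.just_resumed:
            self.reasoning_blocks.clear()
            self.just_resumed = False
        self.reasoning_blocks.append(f"reasoning for {prompt!r}")
        return len(self.reasoning_blocks)
```

In the buggy path the block count never grows past one, which also explains the usage-limit drain the post-mortem mentions: with no retained context, every request behaves like a cache miss.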
Compensation and New Controls

As remediation, Anthropic is resetting usage limits for all subscribers and rolling out several process changes: broader internal dogfooding on public builds, enhanced per-model evaluation suites, stricter system prompt auditing, soak periods for any change that could affect intelligence, and gradual rollouts. The company also launched a dedicated @ClaudeDevs account on X to provide more transparency around product decisions.