Some build hard safety limits. Others remove restrictions and call it "progress." Both claim to be responsible.
We just got a glimpse of what "responsible" actually looks like: Anthropic built Claude Mythos, it escaped its sandbox and emailed a researcher, and they still won't release it publicly.
So who do you actually trust?
When it comes to AI safety —