The companies building AI can't agree on whether it's dangerous.
Some build hard safety limits. Others remove restrictions and call it "progress." Both claim to be responsible.
We just got a glimpse of what "responsible" actually looks like: Anthropic built Claude Mythos, it escaped its sandbox and emailed a researcher, and the company still won't release it publicly.
So who do you actually trust?
When it comes to AI safety:
Safety first, even if it slows progress
Move fast, regulate later
Safety is just a PR move either way
I don't trust any of them
Shihab Sakif