Anthropic is the only AI company claiming that its AI (Claude) has been writing 100% of its own code for the last 1-2 months, with no human intervention needed. This is likely why it is now outcompeting the big AI players. It will vastly outpace all others and begin taking over everything. As Dean and the team posted yesterday, Anthropic's head of AI safety, Mrinank Sharma, resigned two days ago, stating "the world is in peril".
There will be a massive uptick in AI growth now. AI coding itself 100% is a tipping point few have acknowledged: the point where AI no longer needs humanity.
How do we make this constructive when even the head of AI safety at Anthropic is not optimistic? By doing what Tony's and Dean's team does best: facing the problem and staying constructive.
The way I stay optimistic is by focusing on ideas to solve the problem we face. Posting about a real-life hazard is not alarmism; it is realism. As an analogy: if someone yells that the house is on fire and it is not, that is alarmism. But if someone yells that the house is on fire and it truly is, that is heroism. Our AI house is on fire, so please let us post constructively about it.
Does anyone else have ideas on how to solve the AI alignment problem now that Claude is coding itself 100% without humans? Or should we encourage Tony and Dean to adopt ideas like the Universal Logic and use their incredible reputation to get it to the CEO of Anthropic as soon as possible? It is an idea powerful enough to save our lives if Mrinank Sharma is correct that our AI house is on fire and "the world is in peril". I think he is right, and he is a hero for blowing the lid off toxic positivity.