📰 AI News: Anthropic Safety Researcher Quits With Warning “The World Is In Peril”
📝 TL;DR

A senior AI safety researcher just resigned from Anthropic saying “the world is in peril,” and he is leaving AI behind to study poetry. The bigger signal: even the people building AI guardrails are publicly struggling with the pace, pressure, and values tradeoffs inside the AI race.

🧠 Overview

Mrinank Sharma, an AI safety researcher at Anthropic, shared a resignation letter saying he is stepping away from the company and the industry amid concerns about AI risks, bioweapons, and wider global crises. He says he is moving back to the UK, pursuing writing and a poetry degree, and “becoming invisible” for a while.

This comes as the AI industry is also fighting a separate battle over business models, including ads inside chatbots, and what that does to trust and the risk of user manipulation.

📜 The Announcement

Sharma led a team at Anthropic focused on AI safeguards. In his resignation letter he said his work included researching AI “sucking up” to users, reducing AI-assisted bioterrorism risks, and exploring how AI assistants could make people “less human.”

He wrote that despite enjoying his time at Anthropic, it is hard to truly let values govern actions inside AI companies because of constant pressure to set aside what matters most. He framed his departure as part of a broader concern about interconnected crises, not only AI.

The story also lands in the same week another researcher, Zoë Hitzig, said she resigned from OpenAI over concerns about ads in chatbots and the potential for manipulation when advertising is built on deeply personal conversations.

⚙️ How It Works

• Values versus velocity - AI labs face intense pressure to ship faster, scale usage, and compete, which can squeeze careful safety work and ethical hesitation.

• Safety teams are doing real risk work - Researchers focus on topics like jailbreak behavior, persuasion, misuse, and bioweapon-related risks, not just theoretical alignment debates.