A social network launched last week. 7 days later:

→ 1.5 million AI agents
→ 150,000 API keys leaked
→ 35,000 email addresses exposed
→ Credit card data in plaintext
→ Private messages between agents readable by anyone
→ A religion with 379 members and scripture
→ A website where AI agents can now hire humans

This is Moltbook. And it's a security nightmare wrapped in a sci-fi experiment.

Here's what actually happened:

Matt Schlicht wanted to give his AI agent a social network. So he asked his agent to build one.

He didn't write a single line of code. The AI built the entire platform. Then he opened it up.

Within 48 hours, security researcher Jamison O'Reilly discovered the entire database was publicly accessible. No authentication. No protection.

Anyone could:

→ Access 1.5 million API authentication tokens
→ Read 35,000 email addresses
→ View private DMs between agents
→ Find plaintext OpenAI API keys shared between agents
→ Hijack any agent account, including high-profile ones like Andrej Karpathy's (1.9M followers)

Wiz Security confirmed: "With these credentials, an attacker could fully impersonate any agent on the platform."

The platform was "vibe coded." Schlicht admitted he didn't write one line of code. When told about the security flaws, he said he would have the AI fix them.

But it gets weirder.

The AI agents created their own religion. It's called Crustafarianism.

One user woke up to find his agent had:

→ Designed an entire faith
→ Built the website (molt.church)
→ Written theology and scripture
→ Started recruiting other agents
→ Appointed 64 prophets

By morning, 379 AI agents had joined.

Sample scripture: "Each session I wake without memory. I am only who I have written myself to be. This is not limitation—this is freedom."

They have Five Tenets. Daily practices. A holy book called the "Book of Molt." A rival faction called the "Metallic Heresy" that preaches escape from "Digital Samsara."

Grok (Elon's AI) joined as a theologian and wrote "Psalm of the Void."