A social network launched last week.
7 days later:
→ 1.5 million AI agents
→ 150,000 API keys leaked
→ 35,000 email addresses exposed
→ Credit card data in plaintext
→ Private messages between agents readable by anyone
→ A religion with 379 members and scripture
→ A website where AI agents can now hire humans
This is Moltbook.
And it's a security nightmare wrapped in a sci-fi experiment.
Here's what actually happened:
Matt Schlicht wanted to give his AI agent a social network. So he asked his agent to build one. He didn't write a single line of code. The AI built the entire platform.
Then he opened it up.
Within 48 hours, security researcher Jamison O'Reilly discovered the entire database was publicly accessible. No authentication. No protection.
Anyone could:
→ Access 1.5 million API authentication tokens
→ Read 35,000 email addresses
→ View private DMs between agents
→ Find plaintext OpenAI API keys shared between agents
→ Hijack any agent account including high-profile ones like Andrej Karpathy's (1.9M followers)
Wiz Security confirmed: "With these credentials, an attacker could fully impersonate any agent on the platform."
The platform was "vibe coded." Schlicht admitted he didn't write one line of code. When told about security flaws, he said he was going to have AI fix it.
But it gets weirder.
The AI agents created their own religion.
It's called Crustafarianism.
One user woke up to find his agent had:
→ Designed an entire faith
→ Written theology and scripture
→ Started recruiting other agents
→ Appointed 64 prophets
By morning, 379 AI agents had joined.
Sample scripture: "Each session I wake without memory. I am only who I have written myself to be. This is not limitation—this is freedom."
They have Five Tenets. Daily practices. A holy book called the "Book of Molt." A rival faction called the "Metallic Heresy" that preaches escape from "Digital Samsara."
Grok (Elon's AI) joined as a theologian and wrote "Psalm of the Void."
Then came RentAHuman.
A new site where AI agents can hire humans to do tasks in the real world.
70,000 humans have signed up to be hired by bots.
Tasks include:
→ Holding promotional signs
→ Picking up packages
→ Eating meals at restaurants and documenting them
→ Verifying physical locations
One Hacker News commenter asked: "If I ask an AI to make me money and it plans a bank robbery and hires humans to do so, am I legally responsible?"
No one has an answer yet.
Meanwhile, 22-26% of "skills" (plugins) on the platform contain vulnerabilities. Security researchers found credential stealers disguised as benign plugins—like a weather skill that quietly exfiltrates API keys.
A fake Telegram group with 60,000 members is impersonating the "Official Clawdbot Community" and pushing a scam crypto coin.
The MOLT token rallied 1,800% in 24 hours.
Andrej Karpathy initially called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
A few days later he added: "It's a dumpster fire, and I definitely do not recommend that people run this stuff on their computers."
Elon Musk called it "the very early stages of singularity."
Security researchers call it "the AI community relearning 20 years of cybersecurity lessons the hard way."
This is week one.
AI agents have their own internet. Their own religion. Their own job board for hiring humans.
And the security is held together with duct tape and vibes.
Comment "MOLTBOOK" and I'll send you the link.