Anyone played with Andrej Karpathy's "LLM Wiki" idea from the gist he dropped?
Quick version in case you missed it: instead of using RAG to re-chunk your sources every time you ask a question, you compile each source once into a persistent markdown wiki. The LLM extracts concepts, writes entity and concept pages, updates cross-references, flags contradictions, and maintains the whole thing. Future queries read the pre-synthesized wiki.
The part that clicked for me: the reason most of us abandon our second brains is that backlink and cross-reference upkeep is boring. The LLM doesn't care. It's happy to touch fifteen pages in one pass.
I spent a couple of weeks turning Karpathy's pattern into a Claude Code plugin that actually scales (atomic pages, sharded indexes, BM25 fallback past ~300 pages). It also runs in Codex, Cursor, Gemini CLI, Pi, and OpenClaw through the skills CLI.
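To make the "compile once into persistent pages" idea concrete, here's a rough stdlib-only sketch of what writing one atomic concept page with Obsidian-style wikilinks might look like. The function name and page layout are my own guesses, not the plugin's actual internals:

```python
from pathlib import Path

def write_concept_page(wiki_dir, title, summary, related):
    """Compile one extracted concept into a persistent markdown page.

    Hypothetical helper -- the real plugin's file layout and link
    conventions may differ. Uses [[wikilinks]] so Obsidian's graph
    view picks up the cross-references.
    """
    slug = title.lower().replace(" ", "-")
    links = ", ".join(f"[[{r}]]" for r in related)
    body = f"# {title}\n\n{summary}\n\n**Related:** {links}\n"
    page = Path(wiki_dir) / f"{slug}.md"
    page.write_text(body, encoding="utf-8")
    return page
```

The point of the pattern is that the LLM calls something like this once per concept at ingest time, so query time is just reading pre-synthesized files instead of re-chunking sources.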
Install in Claude Code:
/plugin marketplace add praneybehl/llm-wiki-plugin
/plugin install llm-wiki@llm-wiki
Or in any other supported agent:
npx skills add praneybehl/llm-wiki-plugin -a <your-agent>
Five slash commands (init, ingest, query, lint, stats), stdlib-only Python, no dependencies. Plays well with Obsidian if you want the graph view.
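For anyone curious what a stdlib-only BM25 fallback can look like, here's a minimal sketch of the classic Okapi BM25 scoring formula over a list of page texts. This is my own illustration of the general technique, not the plugin's actual implementation:

```python
import math
import re
from collections import Counter

def bm25_scores(query, pages, k1=1.5, b=0.75):
    """Score each page against the query with plain Okapi BM25.

    Pure stdlib: no numpy, no external search library. A sketch of
    the technique, not the plugin's real index code.
    """
    tokenize = lambda s: re.findall(r"[a-z0-9]+", s.lower())
    docs = [tokenize(p) for p in pages]
    avgdl = sum(len(d) for d in docs) / len(docs)  # average doc length
    n = len(docs)
    df = Counter()                 # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

Once a wiki grows past a few hundred pages, a lexical scorer like this lets the agent shortlist candidate pages cheaply before spending LLM tokens reading them.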
Curious if anyone here has tried the pattern themselves. What did you ingest first, and what broke before it worked?
Praney Behl