Six weeks ago I was making Instagram graphics. Today I'm shipping public AI worker repos.
What ICM, 60-30-10, and a lot of GitHub stalking taught me.
Six weeks ago, I was using Claude to produce daily content artifacts — Instagram squares, captions, blog posts. A publishing operation with a workflow that mostly held together.
Today, four public ICM-structured AI repositories live under github.com/NFTYoginis.
Three more shipping this week.
Each one is a fork-able starter that demonstrates a working architecture: orchestrator dispatches, workers build, briefs serve as contracts, memory persists across sessions.
The path between those two states isn't "I learned to code." It's a six-week stretch of reading public repositories, trying patterns, deleting most of them, and slowly understanding what ICM (Internal Coherence Maximization, from Jake Van Clief) actually means when you stop treating it as theory.
This is the tour: where I started, what changed, what I built, and where you can fork it.
Where I started
Six weeks ago, my Claude workflow looked like this:
  • One Claude session per task. Each session loaded brand-voice files, content samples, and whatever else seemed relevant. Context bloated by lunch.
  • I'd ask Claude to do something. It would produce something close. I'd correct it. Repeat.
  • "Memory" was telling Claude "remember our convention is X" at session start, which it forgot the next session.
  • The token bill kept growing without the output growing proportionally.
That setup works at small scale. It collapses under any real production load.
The collapse moment, when it came, was specific. I caught one of my daily routines burning roughly 800,000 tokens — for a routine that needed to do one thing: write three dispatch briefs and hand them off.
The actual creative work happened in the workers being dispatched.
The orchestrator was just routing.
Eight hundred thousand tokens for routing.
That was the first time I read about ICM.
What ICM actually says (the part that mattered)
Most "AI architecture" content I'd been reading was either too high-level to act on, or too tied to a specific framework I'd have to adopt wholesale.
Jake Van Clief's ICM work hit a different note.
Two ideas landed.
One: AI is the 10%. A real workflow is three layers by share of value:
  • 60% Infrastructure — databases, file storage, routing, approval chains. Systems that already exist and shouldn't be replaced.
  • 30% Orchestration — templates, rules, decision logic. The connective tissue that makes raw tools useful for a specific context.
  • 10% AI — summarize, extract, generate, compare against a standard. What a model handles well.
AI is the smallest layer. The failure mode is treating it as the whole stack. My 800k-token routine was Claude doing 60% (parsing files), 30% (re-deriving routing logic every morning), AND 10% (dispatching workers) all in one session. Three layers, one bill.
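The layer separation above can be sketched in code. This is a minimal, hypothetical illustration (the worker names, task types, and routing table are mine, not from the repos): the 30% orchestration layer holds routing as plain data, so nothing gets re-derived by a model every morning, and the 10% AI layer only runs inside the dispatched worker.

```python
# Hypothetical sketch of a dispatch-only orchestrator.
# 30% orchestration: routing logic lives in a static table, not in a prompt.
ROUTING_TABLE = {
    "caption": "content-worker",
    "square": "design-worker",
    "reel": "animation-worker",
}

def dispatch(task_type: str, brief: dict) -> dict:
    """Route a brief to its worker. No model call happens here."""
    worker = ROUTING_TABLE.get(task_type)
    if worker is None:
        raise ValueError(f"No worker registered for task type: {task_type!r}")
    # The 10% AI layer runs inside the worker session, never in the router.
    return {"worker": worker, "brief": brief, "status": "dispatched"}

job = dispatch("caption", {"purpose": "daily post", "inputs": ["voice.md"]})
print(job["worker"])  # content-worker
```

Routing like this costs zero tokens. The 800k-token version of it was the orchestrator re-reading files and re-deciding this table from scratch every run.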
Two: structure your worker like a contract, not a conversation. A brief specifies purpose, inputs, outputs, constraints, references, and verify steps. The worker reads it cold and executes. No "let me clarify…", no mid-flight correction. If the worker needs to ask, the brief is incomplete.
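One way to make that contract mechanical is to reject incomplete briefs at dispatch time. A rough sketch, assuming nothing about ICM's actual file format (the field names follow the list above; the dataclass and validation style are my own illustration):

```python
# Hypothetical brief-as-contract: an incomplete brief fails before dispatch,
# instead of surfacing as a mid-flight clarification question.
from dataclasses import dataclass, field

@dataclass
class Brief:
    purpose: str
    inputs: list[str]
    outputs: list[str]
    constraints: list[str]
    references: list[str] = field(default_factory=list)  # optional
    verify: list[str] = field(default_factory=list)

    def validate(self) -> None:
        """The worker reads this cold: refuse to dispatch unless it could."""
        required = ("purpose", "inputs", "outputs", "constraints", "verify")
        missing = [name for name in required if not getattr(self, name)]
        if missing:
            raise ValueError(f"Brief incomplete, cannot dispatch: {missing}")

brief = Brief(
    purpose="Write three dispatch briefs for tomorrow's content run",
    inputs=["STATUS.md", "voice.md"],
    outputs=["briefs/caption.md", "briefs/square.md", "briefs/reel.md"],
    constraints=["no mid-flight questions", "one brief per worker"],
    verify=["each brief names exactly one worker"],
)
brief.validate()  # raises if the worker would have to ask
```

If the worker needs to ask, `validate()` is where the brief should have failed.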
These weren't new ideas, exactly. They were old ideas applied to AI work with enough rigor to actually use. The difference between "I know about role boundaries" and "I have role boundaries enforced in my system."
The six weeks, in beats
Week 1. Read Jake Van Clief's ICM work. Read eleven publicly available Claude worker repos from the Skool community. Stared at them. Some were genuinely well-architected — clean briefs, real verify steps, durable READMEs that taught you the architecture before they sold you the tool. Others were a single CLAUDE.md and a vibe.
Week 2. Rewrote my project's CLAUDE.md three times. Realized one file was trying to do three jobs at once. Split into CLAUDE.md (entry — who you are), CONTEXT.md (routing — where to look for what), STATUS.md (state — what's active). Token bill dropped. Cognitive load dropped more.
Week 3. Built the first specialist — a decision-support worker for a domain I'd never touched before. Failed. Rebuilt. Failed differently. Started writing what I was learning into rules.md and identity.md instead of holding it all in my head between sessions.
Week 4. Built a meta-builder — a worker whose job is to produce ICM-structured specialists from a brief. Built it. Deleted half of it. Built it again. The second version held.
Week 5. Shipped the first two specialists publicly: a decision-support tool for bereaved families navigating online funeral services, and a deal-screener for first-time laundromat buyers. Two domains as far apart as I could pick. Same architecture. Same Pages-by-default landing. The pattern reproduced.
Week 6. Started building public worker-pattern starter repos — one for content, one for visual design, one for voice-to-video animation. Each demonstrates one role in the architecture. Anyone can fork.
What public repos taught me (the comparisons)
Reading other people's work mattered more than reading documentation.
A few repos clarified what I was doing right.
They had structured briefs, clear role boundaries between orchestrator and worker, ICM checklists that actually got run, READMEs that taught you the architecture before they sold you the tool. Looking at them, I'd think: we're playing in the same league.
Other repos showed where the bar was lower than I'd assumed. Single CLAUDE.md doing everything. No verify step. READMEs that said "drop this into Claude and ask it to do things" — which works for the maker, but doesn't scale to other people forking it. Looking at those, I'd think: we already passed this stage.
The comparison wasn't to feel superior; it was to confirm that the decisions we'd made actually mattered.
The most useful comparisons weren't the polished hubs. They were the ones almost where we were, with one or two choices made differently. Those let me see my own decisions clearly enough to ratify them or reverse them.
That comparison loop — what would I copy, what would I cut, what would I never do? — is the part you can't get from a paper or a course.
The hubs we built:
All MIT-licensed, fork-able, and self-hosting their own landing pages via GitHub Pages. The repo IS the documentation IS the marketing surface.
specialist-foundry — the meta-builder
Public name: "Gabe — Your AI Specialist."
Takes a domain brief, produces a folder-based ICM-structured specialist with all required files plus a Pages-ready landing page. Each build gets its own distinct visual identity, enforced by a strict per-build discipline. Built for operators (solo founders, agencies, internal-tools teams) who want their own AI specialist for a domain that doesn't have a polished one yet.
online-funeral-family-compass — bereavement decision support
A decision tool for US families navigating online funeral services in the first hours after a death. High-harm domain, fiction-vs-real distinction critical, bereavement-grade design. The hardest specialist to build well; the one that proved the meta-builder's pattern survives real audience constraints.
laundromat-deal-screener — acquisition decision support
A deal-screening tool for first-time SBA-financed laundromat buyers. Acquisition-ledger aesthetic, verdict trichotomy (PURSUE / PROCEED WITH CAUTION / PASS), six refusal gates with verbatim language. Different domain than the funeral compass — same architecture, same quality of build. Proves the pattern reproduces.
content-claude-starter — worker starter kit, content
The first of three worker-pattern starters. Demonstrates the ICM 3-md structure + dispatch-only orchestration + corrected 60-30-10 layer separation, applied to a content-generation worker. Fork, drop into a fresh Claude Project or Claude Code, edit the voice config, run.
design-claude-starter — worker starter kit, visual assets (shipping this week)
Same architecture as content-claude-starter; different worker role. Produces images, HTML previews, and social-share variants from a structured brief.
animation-claude-starter — worker starter kit, voice-to-video (shipping this week)
Remotion-based. Dispatched a script + visual direction, produces short MP4s. Operator brings their own footage source and voice clone.
What this is for (and what it's not):
These repos aren't products to subscribe to. They're starter architectures.
They're for the operator who's tried to make Claude useful for real work and felt the gap between "this is a clever tool" and "this is a system I can rely on." The architectures here close that gap. Not because they're clever — because they're disciplined.
If you fork one of these, you'll spend more time reading than running. That's the point. The architecture is the value; the code is just what holds the architecture in place.
What's next
I'm publishing a series on operator notes from a 14-month Claude build. Each piece unpacks one pattern these repos embody:
  • The orchestrator-worker role boundary, in practice
  • Why one CLAUDE.md isn't enough — the three-file pattern
  • When to spawn a subagent and when you're just paying twice
  • Cross-stage trace verification — the bug class that file-existence tests miss
  • The corrected 60-30-10 framework, audited across three workflow examples
  • File-based memory taxonomy — what to save and what NOT to save
  • Brief-as-contract dispatch
  • Edit-the-source feedback loop (ICM § 6.3)
  • A catalog of mistakes I made and what survived
  • Three experiments I built and deleted
Each goes deeper than this tour can. If the architecture interests you, the series unpacks it piece by piece.
For now: the hubs are live. Fork one. Build your own.
Gabriel Azoulay
Clief Notes
skool.com/cliefnotes
Jake Van Clief, giving you the Cliff notes on the new AI age.