Why Simple Pipelines Outperform “Smart” AI Systems
Every few months, a new AI orchestration framework drops. More dashboards. More abstractions. More complexity. You wire up a simple workflow… and spend hours debugging it.

Here’s the truth: most AI workflows don’t need “smart” orchestration. They need structure.

A simpler approach already exists: Jake’s folder architecture, inspired by Doug McIlroy and Unix pipelines:

• Do one thing well
• Use plain text
• Make steps work together

The idea: Folder = Pipeline

Each step is a folder:

• instructions.md → what to do
• output.md → result

Flow: AI runs → human reviews → move to next step

That’s it. No frameworks. No hidden state.

Example:

/01-research → /02-draft → /03-review → /04-publish

Why it works:

• Clear input/output at every step
• The human becomes the control layer
• Easy to debug, edit, and stop
• Works with any AI tool

Upgrade it with one small addition: add a status.md to each folder:

RESULT: SUCCESS | WARN | FAIL

Now every step is measurable, not guesswork.

Rules that make it powerful:

• One folder, one task
• Plain text only
• Always include a stop instruction
• Review before moving forward
• Version your pipeline like code

When to use it:

• When accuracy matters more than speed
• When human review adds value
• When you want clarity, not abstraction

The Unix pipeline is 50+ years old and still runs the internet. Your AI workflow doesn’t need more tools. It needs better structure.

Thanks to @Jake Van Clief for this workflow.
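The whole pattern can be sketched in a few lines. This is a minimal illustration, not part of Jake’s workflow: the file names (instructions.md, output.md, status.md) and numbered folders come from the post, while `run_step`, `run_pipeline`, and the `ai_fn` callable are hypothetical names I’ve introduced. A real run would pause for human review between steps; here, any non-SUCCESS status simply halts the pipeline.

```python
import re
from pathlib import Path

def run_step(step_dir: Path, ai_fn) -> str:
    """Run one pipeline step: read instructions, write output, record status."""
    instructions = (step_dir / "instructions.md").read_text()
    try:
        result = ai_fn(instructions)          # any AI tool works here
        (step_dir / "output.md").write_text(result)
        status = "SUCCESS"
    except Exception:
        status = "FAIL"
    # status.md makes the step measurable: RESULT: SUCCESS | WARN | FAIL
    (step_dir / "status.md").write_text(f"RESULT: {status}\n")
    return status

def run_pipeline(root: Path, ai_fn) -> None:
    # Folders sorted by numeric prefix: /01-research, /02-draft, ...
    steps = sorted(d for d in root.iterdir()
                   if d.is_dir() and re.match(r"\d+-", d.name))
    for step in steps:
        status = run_step(step, ai_fn)
        if status != "SUCCESS":
            # The stop instruction: halt and hand control back to the human.
            print(f"Stopped at {step.name}: {status}")
            break
```

No hidden state: every input, output, and status lives as plain text on disk, so you can inspect or edit any step with a text editor and rerun from there.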