The orchestrator seat: I stopped letting Claude code and got it to delegate.
Most people use AI coding agents the same way: open one chat, type, watch it work, copy the output. The model is doing both the thinking and the typing. That's the bottleneck.

The interesting move is splitting those two roles. One model holds the judgment; other models do the typing. The judgment seat never touches a file. The typing seat never makes a decision. I call this the orchestrator pattern.

If you've been here a while, you'll have seen me developing a system for dispatching workers from within a single Claude Code session: briefs going out, workers running in the background, results coming back to one main seat without me ever switching windows. I've been running it for months. I just open-sourced the harness.

## What it does

Pushing Dispatch is a multi-model dispatch framework for AI coding agents. You sit in one Claude Code session as the orchestrator. From there, you write briefs and dispatch workers. Workers run in the background on whichever model fits the task:

- **Opus** for hard reasoning.
- **Sonnet** for steady execution.
- **Haiku** for mechanical sweeps.
- **Kimi** for long context.
- **DeepSeek** for cheap parallelism.

One harness. Many models. One judgment seat.

## How I use it

Three patterns:

1. **Brief and dispatch.** I write a brief in the orchestrator session, hand it to a worker, and don't read code while the worker runs. I review the output when it's done.
2. **Parallel fan-out.** When ten files need the same kind of edit, I dispatch ten workers and review all the diffs in one pass.
3. **Long context to Kimi.** When something needs the model to hold a hundred-thousand-token codebase in working memory, the brief routes to Kimi automatically.

The point is to never type the code myself, and never do the same kind of thinking twice.

## Quick start

Paste this into a fresh Claude Code session:

```
Read SETUP_WITH_CLAUDE.md from https://github.com/PUSHINGSQUARES/Pushing-Dispatch_ and walk me through setup end to end.
```
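The routing-and-fan-out idea — pick a model per brief, run workers in parallel, review everything in one pass — can be sketched in plain Python. This is a minimal sketch of the concept, not Pushing Dispatch's actual API: `route`, `dispatch`, the task labels, and the 100k-token long-context threshold are all illustrative names and assumptions of mine.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing table (illustrative, not the framework's real config):
# each kind of task goes to the model that fits it.
ROUTES = {
    "hard_reasoning": "opus",
    "steady_execution": "sonnet",
    "mechanical_sweep": "haiku",
    "long_context": "kimi",
    "cheap_parallelism": "deepseek",
}

def route(brief: dict) -> str:
    """Pick a model for a brief; very long contexts always route to Kimi."""
    if brief.get("context_tokens", 0) > 100_000:  # assumed threshold
        return "kimi"
    return ROUTES.get(brief["kind"], "sonnet")

def dispatch(brief: dict) -> dict:
    """Stand-in for a background worker run: returns what a result might look like."""
    return {"task": brief["task"], "model": route(brief), "status": "done"}

def fan_out(briefs: list[dict]) -> list[dict]:
    """Parallel fan-out: one worker per brief, results collected for one review pass."""
    with ThreadPoolExecutor(max_workers=len(briefs)) as pool:
        return list(pool.map(dispatch, briefs))

if __name__ == "__main__":
    briefs = [
        {"task": f"edit file_{i}.py", "kind": "mechanical_sweep"} for i in range(3)
    ] + [{"task": "refactor core", "kind": "hard_reasoning", "context_tokens": 120_000}]
    for result in fan_out(briefs):
        print(result["task"], "->", result["model"])
```

The one structural point the sketch captures: routing is a pure decision (the judgment seat), dispatching is mechanical execution (the typing seats), and the two never mix.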