The model is rented. Your workspace is yours.
Coupling your workflow to one model is a single point of failure.
Pricing shifts. Features get deprecated. A better model lands somewhere else.
If your data, your protocols, and your methods all live inside one vendor's runtime, every shift becomes a multi-week refactor.
The fix isn't picking the "right" model. It's building so the question doesn't matter.
———————————————————————————————————————————————
Three layers, three different jobs. Pull your stack apart and look at what's actually coupled.
Data and protocols. Markdown files, JSON, plain folders.
This layer should already be model-agnostic.
If you're storing notes, briefs, plans, and memory inside a vendor's chat history, you don't have a workspace, you have a session.
Tooling. Python scripts, CLIs, MCP (Model Context Protocol) servers. Also agnostic by default.
The trap is hardcoded paths and runtime assumptions baked into shell commands.
Lift those into env vars. Now any worker can run them.
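A minimal sketch of that lift in Python. The variable names `WORKSPACE_ROOT` and `MEMORY_DIR` are illustrative, not a standard; the point is that no absolute path lives in the script itself:

```python
import os
from pathlib import Path

# Resolve the workspace from the environment, never from a hardcoded vendor path.
# WORKSPACE_ROOT and MEMORY_DIR are made-up names for illustration.
WORKSPACE_ROOT = Path(os.environ.get("WORKSPACE_ROOT", Path.home() / "workspace"))
MEMORY_DIR = Path(os.environ.get("MEMORY_DIR", WORKSPACE_ROOT / "memory"))

def brief_path(slug: str) -> Path:
    """Any worker, any runtime: same relative layout under the env root."""
    return WORKSPACE_ROOT / "briefs" / f"{slug}.md"
```

Swap the runtime and only the environment changes; the script runs untouched.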
Agent invocation. This is the only layer that's genuinely model-coupled. Skills, hooks, slash commands, dispatch syntax. Don't pretend it isn't. Accept the coupling here, then make it swappable.
What you actually want consistent:
Not the model. The behaviour around it.
  1. Voice rules. Same copy standard regardless of who's drafting.
  2. Response format. Same skim-friendly structure across every backend.
  3. Methodologies. Brainstorm, brief, handoff, audit. Same shapes.
  4. Memory access. One protocol, one source of truth, every model reads it identically.
  5. Output contract. Same final message shape so the same audit gate can verify it.
That's the consistency that matters. The reasoning style of Opus vs GPT-5.5 vs Gemini stays distinct on purpose. That's why you have three.
The shared layer
Pull the always-on stuff into a .shared/preferences/ folder. Voice rules, response format, anything load-bearing on every interaction. One source. Generate CLAUDE.md, AGENTS.md, GEMINI.md from it.
Edit the fragment, regenerate the instruction files.
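A sketch of that regeneration step, assuming the `.shared/preferences/` layout above; the fragment filenames are hypothetical:

```python
from pathlib import Path

SHARED = Path(".shared/preferences")                # one source of truth
TARGETS = ["CLAUDE.md", "AGENTS.md", "GEMINI.md"]  # per-runtime instruction files

def build_instructions(shared_dir: Path = SHARED) -> str:
    """Concatenate every shared fragment, in filename order, into one body."""
    fragments = sorted(shared_dir.glob("*.md"))
    return "\n\n".join(f.read_text() for f in fragments)

def regenerate(root: Path = Path(".")) -> None:
    """Write the same body to every runtime's instruction file."""
    body = build_instructions()
    for name in TARGETS:
        (root / name).write_text(body)
```

Edit `voice.md` once, run `regenerate()`, and every backend gets the identical standard.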
Methodologies live next door as on-demand docs.
Brainstorming, breakout briefs, handoff writing. Plain markdown, any model reads them.
The runtime decides how they get injected (auto-skill, dropdown, paste).
Memory becomes an MCP server, not a prompt.
Every model queries it through the same tool call. No vendor-specific bootstrap tax.
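The real thing is an MCP server; this is a file-backed stand-in, in plain Python, showing the shape of the contract. The path and function names are assumptions for illustration:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory/store.json")  # illustrative single source of truth

def memory_query(key: str):
    """The one tool call every model makes; no per-vendor bootstrap prompt."""
    if not MEMORY_FILE.exists():
        return None
    return json.loads(MEMORY_FILE.read_text()).get(key)

def memory_write(key: str, value: str) -> None:
    """Writes go through the same protocol, so every model reads them identically."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    store = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    store[key] = value
    MEMORY_FILE.write_text(json.dumps(store, indent=2))
```

Whether the backend is Opus, GPT, or Gemini, the memory interface is the same two calls.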
The audit gate is the consistency enforcer. Different models finish tasks differently.
That's fine.
What makes a multi-executor architecture trustworthy is a single quality bar verifying every output.
One gate, applied identically, regardless of who claimed "done."
Without that, model-agnosticism is just inconsistency with extra steps.
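A minimal sketch of such a gate, assuming the output contract is a fixed set of required sections; the section names here are hypothetical:

```python
# Illustrative contract: every "done" message must carry these sections.
REQUIRED_SECTIONS = ("## Summary", "## Changes", "## Verification")

def audit(message: str) -> list:
    """One gate, applied identically regardless of which model claimed done.
    Returns the list of missing sections; empty means the message passes."""
    return [s for s in REQUIRED_SECTIONS if s not in message]

def passes_gate(message: str) -> bool:
    return not audit(message)
```

The gate never asks who produced the message. That indifference is the whole point.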
What this buys you
  • A model gets deprecated tomorrow. Swap the executor. Workflows survive.
  • A better model lands. Slot it in as a new backend. Data layer doesn't move.
  • Onboard a contractor on a public repo. They never see your private orchestration layer.
  • The seat changes. The product doesn't.
The model is rented. Your workspace is yours. Build accordingly.
// A<3
Clief Notes
skool.com/quantum-quill-lyceum-1116
Jake Van Clief, giving you the Cliff notes on the new AI age.