My process usually starts as if I’m talking to the owner of a development company.
First, we do a system brief. Outcomes, constraints, what the system must achieve. That brief usually goes through one, two, sometimes three iterations until it’s clean.
From there, I move into a senior-level architectural discussion between me and the AI. We go back and forth, refining ideas, pressure-testing assumptions, and turning rough concepts into structured thinking. Out of that comes real documentation: often 10 to 20 documents in total. Some short, some long. System briefs, architecture outlines, infrastructure designs, domain models, boundaries, ingestion and validation flows, human-in-the-loop reviews if AI is involved, and so on.
A key part of this is canon. I always define canonical documents: a system brief canon, sometimes a domain canon, sometimes something more specific. These are single sources of truth. The AI understands what “canonical” means, and when things drift, I can always pull it back with “read the canon first.” All of this happens before I touch an IDE.
With human developers, this usually happens implicitly: they ask the right questions and create the documentation themselves. AI doesn’t do that. It has no long-term memory. So I create the structure for it. That way, when it starts to hallucinate or wander, I can anchor it instantly instead of re-explaining everything from scratch.
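To make the anchoring concrete: it boils down to re-feeding the canon before every task. A minimal sketch in Python, with made-up file names rather than my actual layout:

```python
# Hypothetical sketch of "read the canon first" -- the paths are invented
# for illustration, not a real project structure.
from pathlib import Path

# Canonical documents: the single sources of truth the agent gets pulled
# back to whenever it starts to drift.
CANON = [
    Path("docs/canon/system-brief.md"),
    Path("docs/canon/domain-model.md"),
    Path("docs/canon/boundaries.md"),
]

def anchor_prompt(task: str) -> str:
    """Prepend the canon to a task so the agent is re-grounded before acting."""
    canon_text = "\n\n".join(p.read_text() for p in CANON if p.exists())
    return (
        "Read the canon below first. It overrides anything you think you remember.\n\n"
        + canon_text
        + "\n\nTask:\n"
        + task
    )
```

The point isn’t the code; it’s that “read the canon first” is a repeatable move, not a vibe.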
Only after that foundation is in place do I move into the IDE and let the agent work. The documentation isn’t overhead; it’s the control system.
So… what do the rest of you do?
Or
Am I just being overly fucking anal?
One thing I forgot to mention: once I move into the IDE (Visual Studio Code, Antigravity, or both), that’s where prompting really matters.
I don’t just start prompting and hope for the best. Before any code happens, I create what I call System Specification Documents. I catalogue them and explicitly require the agent to read every single one in detail.
To make sure it hasn’t just skimmed them, I ask the agent to explain the project back to me: the purpose, constraints, goals, and boundaries. No coding is allowed at this stage. No output. Just comprehension. If it misunderstood something, or if I missed something, we fix that immediately.
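For what it’s worth, the comprehension gate is really just a prompt built from the catalogue. A rough sketch (the directory name and wording are illustrative, not a recipe):

```python
# Rough sketch of the comprehension gate; the directory name is made up.
from pathlib import Path

SPEC_DIR = Path("docs/specs")  # where the System Specification Documents live

def comprehension_check() -> str:
    """Ask the agent to prove it read the specs before any code is allowed."""
    catalogue = "\n".join(f"- {p.name}" for p in sorted(SPEC_DIR.glob("*.md")))
    return (
        "Read every document listed below in full:\n"
        f"{catalogue}\n\n"
        "Then explain the project back to me: purpose, constraints, goals, and "
        "boundaries. Do not write any code. Do not produce any other output."
    )
```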
Only once I’m confident the agent actually understands the system do I move on.
At that point, I step out of the IDE entirely. I go back to tools like ChatGPT, Claude, and Gemini, often all three at once. I play them off against each other and ask a simple question: What’s the best prompt to start with?
Sometimes the answer is a minimal bootstrap prompt. Sometimes it’s a carefully constrained, multi-step instruction set. Either way, the prompt itself is designed outside the IDE first.
I’m also very visual. I prefer starting with frontend and UX before backend logic. I want to experience what the system feels like before wiring up what’s happening behind the scenes. That probably makes me a typical “vibes-first” builder: I don’t see machine code in my head; I see interaction and flow.
Because of that, I spend a lot of time iterating on prompts before they ever hit the agent. I call this meta-prompting: asking the AI how it wants to be instructed, what level of detail it needs, how tasks should be sequenced, and how constraints should be expressed. We go back and forth until the instructions are clear, explicit, and hard to misinterpret.
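If it helps, a meta-prompt is nothing exotic. Something along these lines (the wording is just an example, not a formula):

```python
# Illustrative meta-prompt builder; the phrasing is an example, not a formula.
def meta_prompt(goal: str) -> str:
    """Ask the model how it wants to be instructed before writing the real prompt."""
    return (
        f"I want a coding agent to achieve this goal: {goal}\n\n"
        "Before I write the actual prompt, tell me:\n"
        "1. What level of detail do you need?\n"
        "2. How should the tasks be sequenced?\n"
        "3. How should constraints be expressed so they are hard to misinterpret?\n"
        "Then propose the prompt you would want to receive."
    )
```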
Only then do I drop the prompt into the agent and let it work.
Curious how others handle this phase, especially how much effort you put into prompt design before the IDE versus inside it.