I wanted proof that my prompts improved from four months ago. The results turned into this post.
Around early January I added these instructions to my Claude.ai user preferences:
If required information is missing, ask clarifying questions before answering.
Before giving the final answer: list assumptions, identify missing data, state confidence level.
If appropriate, advise on how to write a prompt more efficiently in the future.
Then I had Claude pull my chat history from before and after the change, and look for patterns. I figured I'd see changes in what I was asking. The actual change was in how I structured conversations around the asking, in three phases.
Phase 1: one-line prompts (early January)
Real prompt from January 8: "How do I set up a eSIM on a Windows laptop?"
I was asking the way you'd ask a search engine. Claude wrote a generic eSIM tutorial. I bounced because it didn't match my situation, and never came back. That was my default: one-sentence prompts. No context, no constraints, no goal.
Phase 2: Claude starts showing its work (mid-January)
This is where the instructions started doing actual work.
The "list assumptions" line forced Claude to write down what it was filling in for me. When a response opened with "Assuming this is a Windows endpoint with standard user permissions and no recent OS reimage," I could correct the wrong guesses before they corrupted the rest of the answer. About half the time, at least one was wrong.
"Identify missing data" produced a list of the questions Claude wanted to ask but was about to silently guess at. Reading that list after every response taught me what to include upfront. Every "missing data" bullet was a future prompt fix.
"State confidence" forced Claude to mark which parts of the answer were solid and which to stress-test. "High confidence that one of the first three checks will identify the cause" is useful in a way that a confident-sounding wall of text just isn't.
The prompt-efficiency line pulled the other three together into a habit. After enough rounds of "next time include the OS version and whether the machine is domain-managed," I stopped needing to be told.
The light bulb wasn't any single tip. It was watching myself burn the first round of every conversation answering questions I could have pre-answered. Claude was running a free coaching loop on my prompts and I just had to read it.
Phase 3: front-loading (February onward)
Real prompt from March 4: "I have a 15 minute meeting with a user experiencing the following issue: As of 2 days ago, Edge and Chrome constantly/consistently freeze when attempting to use it. Only Firefox works..." followed by the full ticket history and "What possible resolutions should I try during this first quick meeting?"
Four things in there before the actual question:
The constraint (15 minutes).
My role (meeting with the user, not handling the ticket asynchronously).
The full context (ticket text, dropped in raw).
The decision I needed (resolutions to try, not a complete troubleshooting tree).
The prompt got longer. The conversation got shorter. No clarifying round. Claude went straight to a triage plan.
That's the whole shift. I didn't get smarter. I stopped making Claude do the context-transfer for me.
For absolute beginners
If you're new to AI chat, you're probably in Phase 1 right now. You'll bounce off generic answers and conclude AI is overhyped. Don't.
The fix isn't a magic prompt template. It's noticing that Claude can't read your situation, and that the first round of any conversation is always you transferring context. Either you do it upfront, or Claude pulls it out of you round by round.
The four lines didn't fix my prompting on day one. They made Claude show its work in a way that let me reverse-engineer better prompts over a few weeks. If you want to skip the dance, paste this into Settings, Personal Preferences:
If required information is missing, ask clarifying questions before answering. Before giving the final answer: list assumptions, identify missing data, state confidence level. If appropriate, advise on how to generate a prompt more efficiently in the future.
Then read what Claude tells you it's filling in, what it's missing, and which parts you shouldn't have trusted. The prompts change when you stop being a passenger in your own conversations.