
Memberships

Clief Notes

26.2k members • Free

The Stronger Human

27.4k members • Free

The Fractional On-Ramp

3.5k members • Free

Minimalist Training Lite

7.8k members • Free

19 contributions to Clief Notes
Every beginner should do this: A personal coach for prompting
I wanted proof that my prompts improved from four months ago. The results turned into this post.

Around early January I added these instructions to my Claude.ai user preferences:

- If required information is missing, ask clarifying questions before answering.
- Before giving the final answer: list assumptions, identify missing data, state confidence level.
- If appropriate, advise on how to write a prompt more efficiently in the future.

Then I had Claude pull my chat history from before and after, and look for patterns. I figured I'd see changes in what I was asking. The actual change was in how I structured conversations around the asking, in three phases.

Phase 1: one-line prompts (early January)

Real prompt from January 8: "How do I set up an eSIM on a Windows laptop?"

I was asking the way you'd ask a search engine. Claude wrote a generic eSIM tutorial. I bounced because it didn't match my situation, and never came back. That was my default: one-sentence prompts. No context, no constraint, no goal.

Phase 2: Claude starts showing its work (mid-January)

This is where the instructions started doing actual work.

The "list assumptions" line forced Claude to write down what it was filling in for me. When a response opened with "Assuming this is a Windows endpoint with standard user permissions and no recent OS reimage," I could correct the wrong guesses before they corrupted the rest of the answer. About half the time, at least one was wrong.

"Identify missing data" produced a list of the questions Claude wanted to ask but was about to silently guess at. Reading that list every response taught me what to include upfront. Every "missing data" bullet was a future prompt fix.

"State confidence" forced Claude to mark which parts of the answer were solid and which to stress-test. "High confidence that one of the first three checks will identify the cause" is useful in a way that a confident-sounding wall of text just isn't.

The prompt-efficiency line pulled the other three together into a habit. After enough rounds of "next time include the OS version and whether the machine is domain-managed," I stopped needing to be told.
2 likes • 11h
I sometimes use a GPT called promptor. It takes your prompt and optimizes it, rates it, and asks a bunch of questions until it's 5/5, super tailored to what you need. I do this only for the most important needs.
Exposed by a tool, not failed by it!
I think we can all agree — we're all looking for results. We're here to up our game by giving ourselves a finely honed knife that cuts through the clutter and delivers the best AI has to offer.

Below is a response to one of our members who built a solid workflow around Jake's Method / ICM, only to keep running into error after error.

--------------------------------------------------------------------------

"Exactly — 'exposed by a tool, not failed by it.'

If you're running into error after error with Jake's Method or ICM, it's almost never the method itself. It's almost always incomplete context.

Think of it like this: you're the best chef in your circle. You're hosting a backyard barbecue. You spared no expense on ingredients and prepped everything perfectly… but you forgot the one secret ingredient that actually makes the dish hit. What hits the table ends up tasting like generic diner food.

Same thing with AI. AI doesn't fail. It simply delivers exactly what the context allows. No more, no less.

When using Jake's Method or ICM, the difference between clean one-shot builds and constant errors usually comes down to:

- Crystal-clear definition of the final desired outcome
- Tight, focused context files (I keep mine under 150 lines each)
- One task, one outcome — chained together properly

Most people fail because they either expect the AI to magically fill in the gaps, or they dump multiple sub-tasks into a single prompt and wonder why it falls apart. Give it the full map up front — role, constraints, success criteria, architecture decisions, everything. Do that consistently and the errors drop dramatically.

How's your Context MD file structured right now? That's usually where the real leverage is."

--------------------------------------------------------------------------

The real skill isn't finding the perfect prompting framework. It's learning to brief the AI with the same precision and clarity you'd demand from a top-tier teammate or system architect. Master that, and everything else starts falling into place.
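For illustration, here is a minimal Python sketch of what "one task, one outcome, chained together properly" can look like when you script it. The call_model stub, the context string, and the task list are hypothetical placeholders, not anything prescribed by Jake's Method or ICM itself:

```python
# Minimal sketch of "one task, one outcome" chaining.
# call_model is a stand-in for whatever client or CLI you actually use.
def call_model(prompt: str) -> str:
    # Wire this up to your model of choice; echoing keeps the sketch runnable.
    return f"[model output for: {prompt[:60]}...]"

# Stand-in for a tight context file (kept under ~150 lines in practice).
context = "Role, constraints, success criteria, architecture decisions go here."

tasks = [
    "Define the data model for the feature described in the context.",
    "Implement the module for that data model.",
    "Write tests that verify the module against the success criteria.",
]

previous_output = ""
for task in tasks:
    prompt = (
        f"{context}\n\n"
        f"Previous step output:\n{previous_output}\n\n"
        f"Task (one task, one outcome): {task}"
    )
    previous_output = call_model(prompt)  # each call does exactly one task
    print(previous_output)
```

Each step gets the same tight context plus only the previous step's output, so no single prompt carries more than one sub-task.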
1 like • 2d
Exactly! At first I tried to let the AI build the MD, but it produced a lot of bloat. Now I use the AI to get the core ideas, but I'm fine-tuning them by hand. Exactly like you, I try to stay very light with maximum information. Another thing is knowing the right granularity: my root contains only minimal, general information, and it gets more specialized as you go down the tree.
Do you use AI for your hobby?
I'm curious what everyone here likes to do for fun (of course building stuff with Claude is fun too lol), and whether you've applied any AI to your hobby. For me it's been super useful for D&D planning, and I find I get to stay in creative flow more. Curious what other people are doing.
1 like • 3d
@Roc Lee It's an idle game, kind of like Cookie Clicker, but it's about growing a little space colony.
1 like • 2d
@Roc Lee It's too early, not very playable yet; I only started recently. I just upgraded to the Pro plan so it can run more often. I still have to figure out the best way for the AI to run an automatic feedback loop: plan, execute, review, then start again without human intervention. Probably some Python script can do it or something. I tried telling it to keep improving without ever stopping, and it kind of worked, but I went away and when I came back Codex had crashed, so I'll have to try again.
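A rough sketch of the kind of script that could drive that plan / execute / review loop without a human in the middle. The AGENT_CMD value and the phase prompts are placeholders; the real invocation and flags depend on whichever agent CLI you use (Codex, Claude Code, etc.) and are not shown here:

```python
# Hypothetical plan -> execute -> review loop around a coding-agent CLI.
# Replace AGENT_CMD with the real command for your tool; how a prompt is
# passed varies by CLI and is intentionally left generic here.
import subprocess
import time

AGENT_CMD = ["my-agent-cli"]  # placeholder, not a real command

PHASES = [
    "Plan the next small improvement to the game and write it to PLAN.md.",
    "Execute the plan in PLAN.md, changing only what it describes.",
    "Review the change, run the game, and note problems in REVIEW.md.",
]

for cycle in range(10):  # cap the cycles so a runaway or crash can't loop forever
    for phase in PHASES:
        result = subprocess.run(
            AGENT_CMD,
            input=phase,          # feed the phase prompt on stdin
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print(f"cycle {cycle}: agent exited with {result.returncode}, stopping")
            raise SystemExit(1)
    time.sleep(60)  # breathe between cycles
```

Capping the number of cycles and checking the exit code is what keeps an unattended run from silently dying the way the crashed session did.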
Something’s not clicking for me here.
I think I'm mixing up some concepts here. What exactly is the difference between:

- a well-designed folder/file structure
- an AI agent
- a full app

I understand them individually at a basic level, but I don't clearly see where one stops and the other starts, or when you'd choose one over the other. In one of @Jake Van Clief's videos, he said that he's using the folder structure as his app. I would appreciate it if anyone can break this down in a simple way or with a real example. Thank you.
1 like • 3d
Just think of an LLM conversation as a big word document. The LLM is just trying to complete the next word in this document. An agent is just a prompt added to this document when the LLM reads the MD that contains it, and it reads that MD when the previous section of words said "read this MD". Skills are the same, but only the header is added until they're called; at that moment they become similar to an agent and inject their whole content into the word document. The folder structure determines how documents are made: it builds deterministic documents from agents and skills layered in a deterministic order by routing them. The full app is the accomplishment of tasks. You can accomplish tasks with only a folder structure and .md files with zero code inside, or you can combine code and guide how it is treated by instructions. Both are ways to make apps. That's how I see things.
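A toy sketch of that mental model, not Claude's real mechanism; the agent and skill contents here are made up purely for illustration:

```python
# Toy model: the conversation is one growing string; agents and skills
# are just text that gets appended when the routing decides to read them.

# Stand-ins for .md files on disk (contents invented for this example).
AGENTS = {
    "agents/researcher.md": "You are the researcher agent. Gather sources and cite them.",
}
SKILLS = {
    "skills/summarize.md": "# Skill: summarize\nCondense the gathered sources into 5 bullets.",
}

conversation = "User: research topic X, then summarize it.\n"

# Reading an agent file injects its whole prompt into the document.
conversation += AGENTS["agents/researcher.md"] + "\n"

# A skill only contributes its header line until it is actually called...
conversation += SKILLS["skills/summarize.md"].splitlines()[0] + "\n"

# ...and injects its full content at the moment it's invoked.
conversation += SKILLS["skills/summarize.md"] + "\n"

print(conversation)  # the LLM only ever completes this one growing document
```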
No Vibes Allowed: Solving Hard Problems in Complex Codebases – Dex Horthy, HumanLayer
Just watched this great talk and made a summary of what I found to be the most valuable information. At some point he talks about the progressive disclosure workflow, which looks like ICM. He says it is good but suffers at scale, because you need to maintain an accurate representation of data that is constantly evolving; if you try to maintain it through agents, you most likely introduce lies over time.

My understanding is that scaling memory through .MD files might not be the best idea, since the memory needs to update frequently and there are tools out there that handle the level of complexity needed to maintain it correctly. What ICM shines at is making the architecture the workflow, and the .MD files are the rules that encompass this workflow, but memory is better off living elsewhere. Or am I entirely wrong? I'm curious what you guys think.

Also, a question for the devs: do you scan the code as the source of truth (this seems the most logical to me, since it's pure uncorrupted truth), or do you try to maintain an index?
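One way to act on "the code is the source of truth" without hand-maintaining an index is to regenerate a throwaway index from the source right before each session, so it can never drift. A minimal sketch, assuming a hypothetical Python project root and output file:

```python
# Regenerate a disposable code index on demand instead of maintaining one by hand.
# ROOT and OUT are hypothetical names; adjust for the real project.
import ast
from pathlib import Path

ROOT = Path("my_project")      # hypothetical project root
OUT = Path("CODE_INDEX.md")    # rebuilt every run, never edited manually

lines = ["# Code index (auto-generated, do not edit)\n"]
for py_file in sorted(ROOT.rglob("*.py")):
    tree = ast.parse(py_file.read_text())
    names = [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    ]
    if names:
        lines.append(f"## {py_file}\n" + "\n".join(f"- {n}" for n in names) + "\n")

OUT.write_text("\n".join(lines))
print(f"wrote {OUT} from {ROOT}")
```

Because the index is rebuilt from the code each time, the "lies over time" problem never gets a chance to accumulate.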
1 like • 4d
@Yucky Yuckyyyy Okay, I see. That's what I'm trying to do right now: building with intent. I used AI to build Jake's ICM and it didn't go well; I'm coming back to a very pure form of ICM, a simple routing system and very intentional rule placement, because right now I feel my .md files are verbose slop everywhere.
1 like • 4d
@Yucky Yuckyyyy Thanks, that's valuable advice. Hopefully I'll return with something working soon, haha.
Gaël Baudouin
3
38 points to level up
@gael-baudouin-9217
🏴‍☠️🦜

Active 4h ago
Joined Apr 10, 2026