Pinned
Welcome to Clief Notes. Here's where to start.
1. Watch the intro video and introduce yourself in the intro post here.
2. Start with The Foundation (free course): concepts, folder architecture, prompting framework. Everything else builds on this.
3. Check in at the bottom of each lesson. Polls, discussion posts, other members working through the same stuff. Use them.
4. When you're ready to build real things, move to Implementation Playbooks (Level 2). When you're ready to build your own tools, Building Your Stack (Level 3).
5. Post your work. Ask questions. Help others when you can.

What are you here to build?
Pinned
Premium and VIP: Questionnaires Are Live
Saturday Tea is coming, so get your questions in. If you want your questions answered live this Saturday, fill out the questionnaire for your tier below.

Premium (Afternoon Tea): https://forms.gle/k6oSAzeo6LY5pUqA7
VIP (High Tea): https://forms.gle/ngkMV1oSGDHWYHEf8

Drop your questions in early so we can work through as many as possible on the call. See you Saturday!
Pinned
I come asking for help! (NEW ROUND! VOTE ONCE A DAY PLS)
Because of the amazing support you all gave in the first round, Wylder (my stepdaughter) made it into the second round! You can vote once a day, and on some days votes count double. I would love, love, love it if any of you would support her chance to work with some of the best animal rescues in the world by casting at least one free vote. You can vote here: Wylder | Junior Ranger. (Not AI related, sorry about that!)
Markdown Hard-Wrapping Is an Inherited Convention That Hurts LLM Workflows
If you build with Claude or any LLM as an active collaborator rather than autocomplete, this one is worth a look.

**TL;DR**

Hard-wrapping markdown at 80 characters is a 50-year-old terminal convention that LLMs now reproduce by default because their training data is full of it. In an LLM-collaborative workflow it actively hurts you: diffs balloon, grep misses matches, retrieval chunks badly, and model output drifts when half your files use the convention and half don't. The fix is to stop hard-wrapping anywhere in your project, pick semantic paragraphs (one paragraph per line) or semantic line breaks (one sentence per line) per folder based on how that folder's content gets read and edited, and let your editor handle soft-wrap for display. Treat project files as a runtime, not as documentation.

**The discovery**

I came across a Claude system prompt (https://pastebin.com/C0s47rjV) posted to Reddit: dense and content-rich but only 71 lines long, with each paragraph sitting on a single line. Then I looked at my own project files. Around 100 markdown files of decision logs, architecture specs, reference docs, and glossaries, and most paragraphs were broken across multiple lines with manual newlines at roughly 70-80 characters. Same amount of text, two to three times the line count. The system prompt was written for a machine to read. Mine were written by following an inherited convention without anyone noticing.

**How the convention got there**

The 80-column rule is roughly 50 years old. Punch cards were literally 80 columns wide, VT100 terminals inherited the width, and early Unix tooling assumed fixed-width displays everywhere. Email reinforced it, with RFC 2822 recommending 78 characters per line. By the time Markdown was invented in 2004, every developer had internalized "wrap at 80" as a virtue, and Gruber wrapped his own examples that way. The spec never required it, and GitHub renders unwrapped markdown identically, but the convention propagated through every README, every dotfiles repo, every "how to write good docs" tutorial.
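Undoing the convention across an existing project is mechanical. Here's a minimal sketch of an unwrapping pass (function name and the structural-line regex are my own assumptions, not from the post): it joins hard-wrapped lines within each paragraph while leaving headings, lists, blockquotes, blank lines, and fenced code untouched.

```python
import re

def unwrap_markdown(text: str) -> str:
    """Join hard-wrapped lines within each paragraph (one paragraph
    per line), passing structural markdown through unchanged."""
    out, buf, in_fence = [], [], False

    def flush():
        if buf:
            out.append(" ".join(buf))
            buf.clear()

    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("```"):
            # Fence delimiter: toggle code-block state, never rewrap inside.
            flush()
            in_fence = not in_fence
            out.append(line)
        elif in_fence or not stripped or re.match(r"(#|[-*+] |\d+\. |>)", stripped):
            # Blank lines, headings, list items, and blockquotes end a paragraph.
            flush()
            out.append(line)
        else:
            buf.append(stripped)  # accumulate wrapped paragraph lines
    flush()
    return "\n".join(out)
```

Run it over a folder once, commit the result as a single reflow commit, and subsequent diffs shrink to the sentences that actually changed.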
My AI writing setup's first rule is: don't write
I'm drafting a very old sci-fi novel of mine with Claude Code. Four scenes in, and I'm more excited about a creative project than I've been in years. The reason isn't the speed. It's that the workspace is built to refuse.

The setup: a folder called `writing-room`. Eight stages, from premise to compilation, each one a markdown directory the AI loads only when it's relevant: compass, world, characters, structure, voice, writing, revision, compilation.

The first rule, hardcoded in `CLAUDE.md`:

> Before generating prose, always load `voz.md` and `padroes-prosa.md`. Without these two, refuse the writing task and ask the author to do Stage 05 first.

Translation: the AI cannot draft a scene until I've locked in the voice. And `voz.md` ("voice") was reverse-engineered from scenes I wrote by hand. The voice is mine. The AI only gets to extend it.

There's also a file called `padroes-prosa.md` ("prose patterns"): nine anti-AI-slop techniques. Verbalized sampling. Fragmentation. Character voice. Rare vocabulary. Every generated scene must apply at least three, and the reviser uses the same file as a checklist.

What this changes in practice:

- I don't fight AI prose. I gate it.
- Each stage loads minimum context. The AI doesn't drown in 200k tokens of worldbuilding to draft one scene.
- After every scene, a `cronista` ("chronicler") skill updates a canon file. Continuity stays cheap.
- I'm the bottleneck on voice. I'm fine with that.

The transferable bit, if you build with AI: the most useful thing your workflow can do is sometimes say no. Refusing to act without the right inputs forces you to produce those inputs, and that's where your taste enters the system. Without that gate, the AI averages you out. Toward the median sentence. The median plot beat. The median version of you.

A friend of mine said that "in order to have a second brain, you need to have a primary working brain." I laughed: true enough. I wanted to build the gate first. Then let it write. And I'm loving it.
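The same gate pattern works outside a `CLAUDE.md` prompt rule. Here's a hypothetical sketch (the function and return shape are my own; only the file names come from the post) of checking for the two voice files before allowing any drafting step to run:

```python
from pathlib import Path

# File names from the post; everything else here is an assumed sketch.
REQUIRED_VOICE_FILES = ("voz.md", "padroes-prosa.md")

def can_draft(stage_dir: str) -> tuple[bool, str]:
    """Return (allowed, message). Drafting is allowed only when every
    required voice file exists and is non-empty."""
    missing = [
        name for name in REQUIRED_VOICE_FILES
        if not (Path(stage_dir) / name).is_file()
        or (Path(stage_dir) / name).stat().st_size == 0
    ]
    if missing:
        return False, (
            "Refusing to draft: complete Stage 05 first "
            f"(missing or empty: {', '.join(missing)})."
        )
    return True, "Voice locked. Drafting allowed."
```

The point of making the check explicit is the same as in the prompt rule: the pipeline fails loudly until the human-authored inputs exist, instead of quietly drafting in an averaged voice.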
Clief Notes
skool.com/cliefnotes
Jake Van Clief, giving you the Cliff notes on the new AI age.