Activity

Memberships

Clief Notes

30k members • Free

46 contributions to Clief Notes
Are we all early or right on time? 🤔
I know we’ve all seen @Jake Van Clief's post https://www.skool.com/cliefnotes/each-dot-is-32-million-people?p=c222c789 I see things like this all the time, leading me to believe that we are all ahead of the curve when it comes to this stuff. But then I see things like the attached photo.

I see so much content about AI adoption in companies and different people talking about how they use it. I also see so much content about the deals large orgs have with Anthropic or OpenAI. I often wonder what this looks like in practice. Are people sitting down and implementing the same methods as us? Or is it an army of people chatting with Claude and leaving it there?

Does anyone have any inside info on how these large companies are implementing this tech? Are the adoption percentages we are told accurate? I feel like the truth is somewhere in the middle, like with most things.
1 like • 1h
@Yucky Yuckyyyy 😂 Unrelated but related: that new Minas Tirith LEGO set is pretty dope
🙏 Thank you for taking the survey.
@Joseph Fioramonti's report is finished! 467 of you placed 4,683 dots on Coca-Cola images, and the results are sharper than anyone expected. You can read the full thing here: Semiotic Analysis Report — Coca-Cola Visual Craving Study

📊 What it found: People want to see the drink. The cold liquid, the condensation, the ice, a real person taking a sip. They reject almost everything else Coca-Cola spends money on. Mascots, illustrations, logo art, campaigns. All of it landed in the resistance pile. That gap between what brands invest in and what people actually respond to is the whole point of what Joe is building.

🌌 He just opened his own Skool, and it's free to join! If you want to learn how Constellations works, what it measures, and how he's using it with real clients, this is where it lives now:
👉 https://www.skool.com/constellations-2153/about
Worth a look if you care about the gap between what people say they want and what actually moves them. He is also a brilliant mind when it comes to branding and abstract data. Worth the join.

If anyone has Skools they are making that they think fit with this community, message @Aaron Quiroz about collaborating with us here! Clief Notes isn't just about me! It's about you all too!

Thank you to everyone who commented on the first post! @Aaron Quiroz @Shawn Pachet @Chris Hall @Justin Smith @Qayyum Khan @Alexander The Greatest @Graham Moore @Jacob Silver @Lucas Flint @Temnii Gray @Elizabeth Brooks @Lies Van den Steen @Kevin Stokes @Luis Arias @Mark Gubuan @Alex Bermudez @Ralph Miller @Richard Chover @Levon Petrosyan @P Patel @Carlos Santos @Sagar Bodhe @Alistair Mckenzie @Bryan Palmer @Ben Bruce @Roc Lee @Mark Benjamin @Brody Billings @Felix Weinzinger @Hayden Lee @Jannetje van Leeuwen @Charles Martin @Keenan Abrantes @Robertas Garalis @Eli Sayers @Arjen Stet @Mike Dixon @Paul Kouwen @Yan Costa @Pedro Costa @Adam Hollywood @Chip Wilson @Kevin Carrasco @Eduardo Salgado @Jerome Anasco
0 likes • 2h
This is hilarious to me because I remember going through this and being like, I don’t care what ads look like, but I like the pretty condensation I guess 😂 The tech works!
.md vs json vs xml
Hi everyone 👋 The .md format is great for human readability, but for LLMs it is well known that JSON or XML works better because they are more structured, and Claude is trained on XML data. So my question for you is: if we replaced the context, example, and reference .md files with .xml format, would the ICM perform better? I would leave CLAUDE.md as is, since that name is embedded in Claude itself. My guess is it would improve, but I only adopted the system a few days ago and have not tested it yet. And if it does improve, would that be a meaningful improvement? I will post what I find out in the coming weeks.
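In case anyone else wants to run the same experiment, here is a rough Python sketch of how you could wrap an existing .md context file in XML tags for an A/B comparison. The file names and tag names are just placeholders I made up, not the ICM's actual layout.

```python
# Hypothetical sketch: wrap an existing .md context file in XML tags so the
# same content can be A/B-tested against the plain-markdown version.
# File names and tag names are assumptions, not the ICM's actual layout.
from pathlib import Path
from xml.sax.saxutils import escape

def md_to_xml(md_path: str, tag: str) -> str:
    """Wrap the raw markdown body in a single XML tag, escaping special chars."""
    body = Path(md_path).read_text(encoding="utf-8")
    return f"<{tag}>\n{escape(body)}\n</{tag}>"

if __name__ == "__main__":
    # e.g. context.md -> context.xml with <context>...</context> around the body
    for name in ("context", "example", "reference"):
        Path(f"{name}.xml").write_text(md_to_xml(f"{name}.md", name), encoding="utf-8")
```

Either way, the content stays identical between the two versions, so any difference you see should come from the formatting alone.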
2 likes • 10h
From what I’ve gathered, the primary benefit of .md over the others is that it is plain text and readable. This makes it easy for both the AI and humans to read and edit. The downside is that this is no different from prompting the LLM with text, because that is exactly what you are doing. JSON and XML will give more reliable results because they are structured maps of what the AI should do. The difference is that many people cannot read code, so for the purpose of transferring tools and instructions they are less accessible. Also, it is difficult to carry nuance into structured formats the same way you can with plain text. If your tools are just for you and the AI, and you are comfortable reading and writing the other formats, then they certainly have a place, and there is a lot of overlap with .md for sure.

Disclaimer: I was literally doing research on this topic yesterday, as I had no idea what JSON was when Claude first made a JSON file for me. This is what my learnings led me to, so I’m sure there is more nuance to this topic.
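To make the readability-vs-structure tradeoff concrete, here is a tiny illustration of the same instruction written as plain text and as JSON. The field names are made up for the example; nothing here is ICM-specific.

```python
# Illustrative only: the same instruction as plain text (easy for a human to
# skim and edit) and as JSON (explicit fields a program or model can parse
# reliably). The field names are invented for this example.
import json

plain_text = "Summarize the attached report in three bullet points, neutral tone."

structured = {
    "task": "summarize",
    "source": "attached report",
    "output_format": "three bullet points",
    "tone": "neutral",
}

print(plain_text)
print(json.dumps(structured, indent=2))
```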
1 like • 9h
@Moon Kim It certainly could be valuable, but that’s above my skill level. I do think that in the long run the LLM itself is effectively that tool, in terms of being able to read both formats and understand them.
mogging claude with impeccable rizz: why *you* matter in the workflow
this post is not for advanced users of ICM. attached is a webm of how i feel every time I add something to one of my taste files. the goal of this post is to make you feel this same, special feeling.

---

in 5 years everybody will be good with ai, so being "good with ai" is not gonna separate you from the pack anymore, my sweet prince. everybody and they momma will be able to make "the thing": landing pages, logos, sales emails, cute lil apps.

so in a world where everyone can generate "the thing", the separator becomes taste - or rizz as we like to call it. because making the thing and making the thing beautiful are not the same. this is how we define rizz.

now i'm not telling you to flirt with the model or prompt in a sultry tone (although i've been running some tests and this might work). i'm talking about mogging claude into understanding your tastes. you need lil bro to pipe down when YOU know best and he's just along for the ride. but that requires defining when you know best. can you get lil fella to understand your judgment? your standards? your eye for what makes the thing go BOOM and BANG and not _BROTHER EUUUGHHGHGHGHGHG_? this is rizz in ai.

everyone and they momma know the devilish details of layer 1: can it make the thing? sick. we're all at least here. hi fambly. but layer 2 is the game we're really playing with ICM: is the thing goin' BOOM and BANG, or are you still going "BROTHER EUUGHGHGHGHGH"?

ai is an AMPLIFIER. if your judgment is mid, claude is just pushing mids faster. if your taste is undefined, kimi is helping you ship slop at scale. but if your judgment is sharp - lil fella helps you make more without lowering that bar. It AMPLIFIES your judgment.

either jake or boris or some legend summed it up perfectly talking about web design. if everyone can make a website with ai, designers don't go _poof_ because your cousin got sick at telling claude to make a website (your cousin is still sick tho for selling us weed in 7th grade), but REAL web designers become terrifying.
2 likes • 15h
Nice write-up homie, I think you nailed it
🧪 New benchmark out
New benchmark out of Meta FAIR, Stanford, and Harvard called ProgramBench. The setup: you get a compiled executable plus its docs. Source code stripped. Rebuild the program from scratch in any language you want. Tests check input/output behavior against the original binary. 200 tasks, from small CLI tools up to FFmpeg, SQLite, and the PHP interpreter.

📊 Results across 9 models: Zero tasks fully solved. Opus 4.7 was the best, passing 95% of tests on only 3% of tasks. GPT 5.4, Gemini 3.1 Pro, and Haiku 4.5 hit 0% in that bucket. The interesting part is section 5. Even the model solutions that "worked" looked nothing like the human reference. Median 1,173 lines vs 3,068 in the original. Flat directories. Fewer functions, each one longer. GPT 5.4 wrote 96% of its final code in a single turn on most tasks and never modified existing files on roughly 40% of runs.

🎯 Why it matters for us: The benchmark separates writing code from designing software. Models can produce syntax all day. They cannot yet decompose a real system into coherent modules, pick the right abstractions, or organize a codebase the way a working engineer would. That gap is what computational orchestration points at. It is also where the durable value lives.

🛠 Try it: Pick an easier task from the repo (the paper flags nnn, fzf, gron, and jq as more tractable). Run it against Claude or your model of choice. Watch where you and the model split. Note the design decisions you make that the model never even raises. Post your runs and attempts to create a harness that would allow the model to do it. Wins, failures, weird outputs, all of it.

📍 Paper and Repo: ProgramBench

I'm building something on top of this right now. More soon.
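For anyone who wants to poke at the core idea before reading the paper, here is a rough Python sketch of the scoring loop as I understand it: run the same input through the original binary and your rebuilt program, then compare the output. The paths and test inputs below are placeholders, and the real ProgramBench harness will differ.

```python
# Rough sketch of the benchmark's scoring idea: feed identical input to the
# original binary and to a rebuilt program, then compare observable output.
# Paths and test cases are placeholders; the actual ProgramBench harness
# and task layout will differ.
import subprocess

def run(cmd: list[str], stdin_text: str) -> str:
    """Run a command with the given stdin, return its stdout ('' on failure)."""
    try:
        result = subprocess.run(
            cmd, input=stdin_text, capture_output=True, text=True, timeout=30
        )
        return result.stdout
    except (OSError, subprocess.TimeoutExpired):
        return ""

def compare(reference: list[str], candidate: list[str], cases: list[str]) -> float:
    """Fraction of test inputs where the rebuilt program matches the original."""
    passed = sum(run(reference, c) == run(candidate, c) for c in cases)
    return passed / len(cases) if cases else 0.0

if __name__ == "__main__":
    cases = ["hello world\n", "", "1 2 3\n"]  # placeholder inputs
    score = compare(["./original_binary"], ["python3", "rebuilt.py"], cases)
    print(f"I/O match rate: {score:.0%}")
```

Watching where a harness like this diverges from the reference is a quick way to see the design gap the post describes.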
0 likes • 3d
@Alain Grignon In general I do agree, but I’ve built some things that I’m not sure can replace current tools, even if they work better, because I have no way to confirm that their longevity and scaling will be reliable. And for a company that is not software based, losing days to a system going down could be pretty bad. I realize AI can likely solve all of the issues I just listed, but it’s a risk that I can’t confirm for certain won’t happen.
0 likes • 16h
@Alex Lucero thanks for the recommendation!
Donald Roy (@donald-roy-5589)
Automation Engineer
Level 4 • 21 points to level up
Active 25m ago • Joined Mar 22, 2026