
Memberships

Clief Notes

27.4k members • Free

106 contributions to Clief Notes
Dave & Jake's Picks
We've been hoarding links like digital pack rats -- and it's time to crack the vault open. Jake and I put together a running list of the tools, resources, and random goldmines we keep coming back to. The stuff that actually stuck after the hype wore off. If it survived our workflows, it earned a spot here.

https://www.skool.com/quantum-quill-lyceum-1116/classroom/c7f102c7?md=59285d6b92ed425cae7f439761e26acf

WHAT THIS IS

Think of it as a curated toolbox -- not a "Top 100 AI Tools" listicle from some SEO farm. These are things we've actually used, broken, duct-taped back together, and kept reaching for. Some are well-known, some are buried gems we stumbled on at 2am while chasing a rabbit hole.

WHAT WE NEED FROM YOU

This page is alive. It's not a monument -- it's a workbench.

- Drop a comment if something on here saved you hours (or cost you hours -- we want to know that too)
- Suggest additions -- what's in YOUR toolchest that we're sleeping on?
- Call us out -- if something's outdated, broken, or just not as good as the alternative you found, tell us
- Share your use case -- same tool hits different in different hands. How are you actually using these?

We'll keep updating this as the collective stack evolves. Your feedback shapes what stays, what goes, and what gets added next.

Disclaimer: You know the drill -- this is garage tinkering, not production gospel. Your mileage may vary. Duct-tape what works and break what doesn't. Let's keep building brains that can't be taken away from us.
Dave & Jake's Picks
1 like • 2h
Has PMM survived your workflow?
0 likes • 1h
@David Vogel lol thanks. I pinged Jake about a Lyceum membership giveaway to drive installs and bug hunting. Was thinking of throwing in a Claude max subscription as one of the gifts / prizes. Is that allowed according to the community rules?
Where do you think AI will be in 6 months?
Everything is changing so fast. As a Millennial who lived through the introduction of PCs (loved my Apple LC II), dial-up internet, iPhones, and now AI, I can't believe how much has changed. Even with AI: in December I remember people being skeptical of AI ads, then 4-5 months later it seems like if you don't do AI ads you're gonna be left behind. At the same time, while I love what I'm able to do with AI, I'm worried about the environmental impacts and wonder if those will catch up to us before we figure things out. For my industry I'm definitely seeing a need to upskill quickly to stay relevant, probably even by the end of the year. What are others thinking?
3 likes • 12h
There's the sexy story of AI, and then there's the reality on the ground with respect to training, engineering, and architecture. Fundamentally I don't think the way we work will change, but below the visible surface everyone will try to squeeze more performance out of every training dollar by getting creative. We see this with Microsoft Research and Claude experimenting with agentic memory (in addition to pushing hard on the context-window-size and KV-cache agenda), Google with RNN-style models for memory, and the academic research around RLMs (recursive language models). The academic research has been exploring creative ways to make AI smarter without relying solely on retraining new models.

What I gather will happen is that the market will become more immune to news of new models (it's somewhat a vanity metric and a fad now), and we'll probably pay attention only when new models bring monumental changes and capabilities. Most of this will be necessitated by cost, bounded by the limits of how much hardware, power, and water is available in the training supply chain to create bigger and more complex models. Kind of like Moore's law applied to AI.
When To Share?
Debating whether to share a project for initial feedback. It's a math game called MathMines, currently on Lovable. Wondering:

- How do you know when a version 1 project you're working on is ready/safe to be shared in a community like this one?
- What are your security checkboxes?
- Are there any best practices or protocols one should follow when first launching a project?

Still trying to figure out how I could implement Jake's folder system for version 2; gonna jump into Claude Code for that one.
2 likes • 1d
1. First-time startup founders concern themselves with secrecy. They should think more about execution.
2. Second-time founders think about distribution, not so much secrecy.
3. Seasoned founders think of ecosystems.

Ship the minimum viable product carved out of the full-featured one you have developed (or are developing), for free. It's the best way to test product-market fit. Fixes and feature builds are an iterative process. Feedback and initial traction are more valuable than a polished product. The full-featured version can always ship later behind a paywall or subscription, from the backlog, through your distribution channels. Build your ecosystem asynchronously.
1 like • 1d
@Siv Darmalingum yes, build and collect feedback at the same time. If it’s a math game you built, you could share it with friends and family and ask them for their feedback.
Any Success with Local LLM Setups?
Hi everyone! With the new CoPilot plan restructuring limiting AI resources and making them more expensive, I have been wanting to get my feet wet with a free local LLM setup. Has anyone succeeded in creating a free local LLM setup that rivals or gets close to the reasoning and speed of Codex, CoPilot, Claude, etc.? Please share your free setups: IDE, model, GPU, and RAM rig. I'd like to hear about the successes you've had and the failures you've experienced.
2 likes • 1d
You might want to look into opencode and OpenRouter (the LLM provider). It's decently good, and most models cost less there than what their first-party providers charge. It's a better alternative to running a local model: even with a 32GB M4 you're stretching the limits, and the performance is at best 1/10 of what frontier models like Sonnet and Opus can do.

I use local models for repetitive, task-worker stuff. I plan with Opus or Sonnet (or sometimes Gemini Pro or DeepSeek), and then the work gets executed by smaller models, some running locally. But these usually are not coding tasks; for those I rely on Sonnet or Haiku.

The thing about local models is you need to think about tradeoffs. What can you run reasonably well on a smaller local model that would save you time and tokens on your primary subscriptions? Since I spread the cost of running these tasks across different models and providers, I probably save a tonne compared to driving everything through a single model or provider.

Here's what I'm running:
- MacBook Pro M2, 8GB
- Raspberry Pi 3, 4GB
- Intel Xeon Platinum, 48 cores, 64GB RAM, RTX 5060 8GB
- Claude Code CLI
- Opencode CLI
- VS Code and GitHub Copilot
- OpenRouter, Vertex AI, and Model Ark
- And PMM, my memory layer (this ties all my setups together with a single unified memory, synced across all my machines): https://github.com/NominexHQ/pmm-harness-dist
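The "plan with a big model, execute with smaller ones" split above can be sketched as a tiny router. This is a hypothetical sketch, not the poster's actual setup: the model names, the task tiers, and the routing table are assumptions; the only documented part is OpenRouter's OpenAI-compatible chat-completions endpoint, which is authenticated with an `OPENROUTER_API_KEY` bearer token.

```python
import json
import os
import urllib.request

# Hypothetical routing table: cheapest tier that can handle each task kind.
# Model IDs here are illustrative, not a recommendation.
ROUTES = {
    "plan": "anthropic/claude-sonnet-4",    # heavyweight reasoning/planning
    "code": "anthropic/claude-3.5-haiku",   # cheaper coding tasks
    "chore": "local/qwen2.5-7b",            # repetitive local task-worker jobs
}

def pick_model(task_kind: str) -> str:
    """Route a task to a model tier; unknown kinds fall back to the local tier."""
    return ROUTES.get(task_kind, ROUTES["chore"])

def ask_openrouter(model: str, prompt: str) -> str:
    """Send one chat turn to OpenRouter's OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Routing this way keeps the expensive subscription for the planning turns and lets cheap or local models absorb the repetitive ones, which is where the cost spreading described above comes from.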
why does it feel like i keep ending up in the same place?
came back after a week offline and spent an hour just reading posts. people are shipping. full agent teams, named specialists, live clients. and i'm still on the foundation. but here's the honest version of what's actually happening. i build something. it works. then i look at it and realise it's not what i actually wanted. so i break it and start again. then halfway through the rebuild, a new idea comes in and now i don't know if i should finish what i started or pivot to the thing that's clearly better. i've rebuilt the same system three times in a month. and the worst part.. each version was better than the last. so was the rebuilding wrong? or is that just what building actually looks like before it locks in? genuinely asking because i can't tell if this is a me problem or if everyone here is quietly doing the same thing and just posting the final version.
0 likes • 1d
What you see in public builds is mostly the tip of the iceberg: not the polished product, but not the entire raw emotional rollercoaster that went into the builds either. You're on the right track.

I'm currently building what looks like a shipped product, but I'm way behind on my backlog and have had to scrap some tests I needed to run probably 6-7 times in the last two weeks. I had to restart those tests each time, and I can't make meaningful changes or new features until they are done.

Building is iterative. It's not always up and to the right. Some days you stumble; some days you're smooth sailing. It's not always leaps and bounds. Improvements are almost always incremental. How you know you're learning: you're encountering friction and handling one issue at a time. Eventually you've cleared the woods, and when you look back, you're all done.
Millenial Cat
Level 5
29 points to level up
@millenial-cat-4349
sometimes i over-engineer

Active 17m ago
Joined Mar 24, 2026
ENTJ