
Memberships

AI Profit Boardroom

2.9k members • $59/m

Clief Notes

23.4k members • Free

Chase AI Community

58.1k members • Free

Yieldschool

114 members • Free

The Blueprint Training

4.7k members • Free

5 contributions to Clief Notes
ICM paper published to preprint today!
For those of you in academia who would like to cite any of my research: I just pushed my ICM paper and submitted it as a preprint. You'll notice that the working title is ICM, but I refer to the workspaces as MWP. I'm pushing an update right now removing all mention of MWP; Model Workspace Protocol was the original name I had given it, but I think Interpretable Context Methodology makes more sense for what is actually happening. Enjoy! https://arxiv.org/abs/2603.16021
0 likes • 30d
Amazing stuff, as everyone else has said. I'll probably have to read it 15 times to truly get it, and then have NotebookLM read it to me a few more. This methodology makes the most sense to me from a logic standpoint: it seems like the clearest path to building actionable, manageable, and understandable workflows. Thank you for sharing it with us and for building out a community to help us work with it and understand it.
Security and Open Source Tools and Projects
So I'm just curious how everyone is handling security and review around some of these tools that are becoming available unbelievably quickly. I'm sure everyone here knows about the drama around OpenClaw, the fake project, and all the mayhem that created. @David Vogel just posted a pretty great tool in a comment called "Claude Code Users, You're Wasting Tokens." When I reviewed it, it looked amazing; it's a huge project and it's been around for a while.

Some of these viral tools concern me because of the possibility of a single errant line of code that allows an unseen prompt injection. When you happen to trigger it, who knows what it gives access to. I spend a lot of time as a consultant on corporate implementations and also teach as an adjunct professor at a small local college. I just completed some SecAI+ "Teach the Teacher" training, so this is top of mind right now, and even that training offers no specifics beyond "do your due diligence."

At this time I'm pretty careful to avoid brand-new tools and tools with instant virality, because there just hasn't been enough time for them to be validated. I haven't found a vetting tool I believe in yet, and I don't have the time, or in many cases the knowledge, to do it myself. That's it. I'd love everybody's input. I feel like this is a critical topic in this space, and it's often overlooked.
0 likes • 30d
@Robert Seltzer currently I do something like this: never recommend tools, skills, plugins, repos, or packages without first verifying they exist and are accessible. Specifically:

1. Always include a direct link to the actual source (not just a curated list reference) so you can review it independently.
2. Only recommend repos with significant community trust (thousands of stars, established history).
3. Flag anything with very low stars or suspiciously rapid viral growth (like OpenClaw) as a potential risk.
4. Treat security and injection concerns as a priority.

And since I've called it out, I get a lot more than that: I get an assessment of the recommendations. Claude, of course, takes it upon itself to evaluate each one at a high level (not actual code validation or review) and gives me its "opinions" on which appear to carry the least risk, based on the level of participation and those markers of community trust. But that is not a guarantee.
1 like • 30d
I attached a PDF of what one of my reports gives me, and I certainly don't deserve any real credit for it. I've just given the AI some basic rules and considerations and it runs with them from there, but there are a lot of good points in it. I didn't want to put this in the original post and seem negative, but there are some really good considerations in there, like the hooks the tool installs.
Claude Code Users: You're wasting tokens!
Just stumbled on this resource: 87K stars on GitHub. A performance optimization system for AI agent harnesses, from an Anthropic hackathon winner. From the repo: "Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, hooks, commands, rules, and MCP configurations evolved over 10+ months of intensive daily use building real products. Works across Claude Code, Codex, Cowork, and other AI agent harnesses." https://github.com/affaan-m/everything-claude-code Cheers
1 like • Mar 23
So I'm just curious how everyone here is vetting some of these projects, given the risks of prompt injection and malicious code, especially in open-source projects. As someone dealing with SecAI and corporate implementations, pulling in some random project without vetting it feels really risky, even when it looks amazing. I'm just wondering if anyone has a process they're using to do that. This absolutely looks awesome; I want to be clear about that.
🏁 Foundations 1.1 Check-In
You just finished the setup lesson. Vote below so we know where everyone is starting from. If you picked "Something else," drop a comment and tell us what.
Poll
362 members have voted
3 likes • Mar 15
I currently have a number of tools, including a 48 GB local LLM machine currently running Qwen3.5 35B. I'm hoping to leverage the folders in my various scenarios to route each project to the correct LLM and to increase my organization and flexibility. I also have Claude Pro and Google AI Pro.
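As a rough illustration of that folder-based routing idea: a lookup from a project's top-level folder to a model endpoint. The folder names, model names, and URLs here are hypothetical placeholders, not an actual setup:

```python
# Hypothetical routing table: top-level folder -> model config.
# Names and endpoints are illustrative only.
ROUTES = {
    "research": {"model": "qwen-local", "base_url": "http://localhost:8080/v1"},
    "client-work": {"model": "claude", "base_url": "https://api.anthropic.com"},
}
DEFAULT = ROUTES["research"]  # fall back to the local model

def route_for(project_path: str) -> dict:
    """Pick a model config from the top-level folder of a project path."""
    top = project_path.strip("/").split("/")[0]
    return ROUTES.get(top, DEFAULT)
```

The point is just that the folder name becomes the single source of truth for which model a project talks to.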
YOU ASKED WE DELIVERED. New Structure for EVERYTHING.
This is a long update, but PLEASE read through it; it will get you up to date on everything Jake and I have been building. We realized we needed structure, easy access, and more content. Every lesson links to the lessons around it. Every module builds on the one before it. You can start anywhere that makes sense for you, but everything connects back to everything else. It's the same architecture we teach you to build for your own workflows.

Where to Start
The Foundation is the starting point. The concepts. The folder architecture. The prompting framework. If you haven't done this, do this first. Everything else assumes you have.
Implementation Playbooks (Level 2) is where you use what you learned. Each module is a complete build guide for a specific domain. Building Animations. The Ultimate Browser. Pick the one that matches your work. Finish with something real.
Building Your Stack (Level 3) is where you build the tools. Custom UIs. Remote access. Infrastructure that wraps around your workflows. You're not adapting to someone else's setup anymore; you're assembling your own. They're abstraction layers. Each one builds on the last.

How Lessons Work Now
Every lesson follows the same structure: what you'll get, the content, resources, cross-links, and at the bottom a discussion post with a poll. The polls are there for a reason. We want to know where you are. We want you talking to each other. When you finish a lesson, scroll down, vote, and drop a comment. If you're stuck, say so. If something clicked, share what it was. The community gets better when you use it.

New Categories
We reorganized the community posts. Here's where things live now:
Announcements - Updates from us. New content drops. Changes to the course.
General Discussion - Conversation that doesn't fit elsewhere. Questions, ideas, whatever's on your mind.
Show Your Work - Post what you built. Animations, automations, folder setups, custom tools. This is where the community challenges live. Share your stuff.
2 likes • Mar 15
I can't wait to dig in and get my head around the concepts so I can use them to further my solutions, and potentially my offerings to clients.
Mark Saunders
@mark-saunders-2489
Learning

Active 4d ago
Joined Mar 15, 2026