Ari’s Space: Your 24/7 AMA Hotline
Welcome to Ari’s Space. Think of this as your always-open support thread inside Clief Notes. If you have a question, feel stuck, need direction, want feedback, or just need a little clarity, drop it here.

This space is for:
- Questions about Clief Notes or ICM
- Help applying what you’re learning
- Feedback on your ideas, content, offers, or next steps
- Accountability nudges
- “Am I thinking about this the right way?” moments
- Anything you’d usually wish you could ask me directly

No question is too small. If it matters enough for you to ask, it belongs here. I’ll be checking in regularly and answering as much as I can. Use this thread like an AMA that never closes. Drop your question below whenever you need support.

Welcome to Ari’s Space. <3
Council of 5
I am known to take Claude outputs and put them into ChatGPT for blind-spot checks. Today I decided to create a "Council of 5" skill that runs any question, problem, solution, document, etc. through 5 distinct personalities, with 3 rounds of discussion, then a consensus.

1. The professor: peer-reviewed/cited sources only
2. The teacher: logical; wonders, "is this the right question to be asking?"
3. The founder: can this be done, and what is the fastest way?
4. The outsider: zero context, thinks outside the box
5. The contrarian: hunts for the fatal flaw in everything

Sharing the skill here if it could help anyone.
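For anyone who wants to adapt the idea outside of a Claude skill, here is a minimal sketch of the council loop in Python. Everything here is illustrative: `ask_model` is a hypothetical placeholder for whatever LLM call you use, and the persona prompts are paraphrased from the list above.

```python
# Hypothetical sketch of a "Council of 5" review loop.
# `ask_model` is a stand-in for a real LLM call (not an actual API).

PERSONAS = {
    "professor": "Argue only from peer-reviewed/cited sources.",
    "teacher": "Check the logic; ask whether this is the right question.",
    "founder": "Ask: can this be done, and what is the fastest way?",
    "outsider": "You have zero context; think outside the box.",
    "contrarian": "Hunt for the fatal flaw in everything.",
}

def ask_model(system_prompt, transcript):
    # Placeholder: swap in a real model call that takes a system prompt
    # plus the running discussion and returns a reply string.
    return f"[{system_prompt.split(';')[0].split(':')[0]}] reply #{len(transcript)}"

def run_council(question, rounds=3):
    # Every persona speaks once per round; the transcript accumulates.
    transcript = [f"Question: {question}"]
    for _ in range(rounds):
        for name, style in PERSONAS.items():
            reply = ask_model(style, transcript)
            transcript.append(f"{name}: {reply}")
    # Final pass: a consensus summary over the whole discussion.
    consensus = ask_model("Summarize the discussion into one consensus.", transcript)
    return transcript, consensus
```

With 5 personas and 3 rounds you get 15 persona turns plus the consensus pass, which matches the structure described in the post.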
Weirdness Engine
As a product designer, I see that AI pushes design towards the middle of average. Even Claude Design produces some pretty generic outputs. I realized that what we need in order to make things feel more creative is to introduce more weirdness (https://github.com/sethjenks/weirdness-engine). In order to do that well and produce outputs that are not alienating I had to define the right level of weirdness. I did some research and created a “Weirdness Engine” to produce more interesting software designs. I would love it if you all tried it out and provided critical feedback. Let me know what you make with it.
Anthropic ships Claude design. OpenAI ships pets.
Whatever model you're using right now is good enough. The question isn't capability anymore. It's taste.

Capability has been commoditizing for eighteen months. The benchmarks plateaued in the territory where the difference stops mattering for most work. The model is no longer the lever.

Watch what the labs are shipping right now and notice the same thing from two directions. Anthropic shipped Claude design. Identity, typography, layout, voice, the editorial spine the whole product runs on. The brand has a point of view and they're letting it carry the surface. OpenAI shipped pets. Floating overlay. /pet command. Custom personality presets. The brand is leaning into character, presence, attachment.

Don't read these against each other. Read them together. Both labs are reaching for the same lever at the same time, in different registers. Both are admitting taste is now load-bearing.

Two flavors of the same lever

Editorial taste fits a power-user surface. Rigorous. Stable. A design system signals reliability. Character taste fits a wider surface. Warm. Present. Pets signal companionship. Neither is "better." They're aimed at different rooms. Picking which room you're in, and refusing to be a generic version of all rooms, is the work.

What this means for the rest of us

If the labs are now competing on taste, the same thing is happening one layer down. To everyone using them. When AI gives you all the tools, your taste is the differentiator. To some extent. Craft, distribution, relationships still matter. But the lever that just rotated for the labs is rotating for the rest of us too. The model can't tell you what to make. Your judgment about what to do with all of it can.

The takeaway

The model is good enough now. The next leverage point isn't more capability. It's the judgment to use it well. Taste is the lever. For them. For us.

Full breakdown.
The good-enough plateau, the two registers of taste, and what it means for makers, all live here: https://aris-space.com/documents/thoughts-and-scribbles/the-taste-transition
Stop tuning the model. The harness rewrites itself.
jcode boots Claude Opus in 14 ms, runs at 27.8 MB, and edits its own Rust source mid-session. Same model inside. I believe this is the start of a sub-genre nobody is naming yet.

Quick context: The first era of LLMs was prompt engineering. Era two was context engineering. Era three is what we've all been arguing about all year: model picking, Opus 4.7 vs Gemini 3 Pro vs Kimi K2.6. Now, someone rebuilt the wrapper around Claude in Rust, and the gap on cold start, RAM, and per-session scaling is bigger than any model gap I've seen this year.

What jcode actually is:
- Coding agent harness, 94% Rust
- Custom terminal called handterm, custom Rust mermaid renderer
- Native logins for Claude, OpenAI, Gemini, Copilot, Azure, plus aggregator providers
- MCP works out of the box. Falls back to your existing .claude/mcp.json so anything you've already wired up keeps running
- Install via brew, curl, or cargo build. Single binary
- 3.3k stars on GitHub, 57 releases, actively maintained

The numbers (jcode vs Claude Code, from the README):
- Cold start: 14 ms vs 590 ms to 3.4 seconds. 42 to 245 times faster
- Idle RAM with local embedding off: 27.8 MB
- 10 sessions in parallel: 260 MB total vs 334 MB to 3.2 GB for Claude Code
- Per added session: 9.9 MB vs 76 to 318 MB
- Custom mermaid renderer the author claims is 1800x faster than browser-based versions

That last one is the kind of detail that tells you what they're really doing. Someone is going through every layer of the Claude Code experience and rebuilding it natively, and the gains compound.

What's actually different at the harness layer:
- Self-dev mode. Agents inside jcode can edit the harness's own Rust source, run cargo build, and hot-reload the binary across active sessions without dropping you. The wrapper is recursively modifiable from inside the agent loop.
- Memory as semantic vectors per turn. Recall is automatic via cosine similarity. Not "remember to update CLAUDE.md".
There's an ambient mode that consolidates memory and resolves conflicts in the background while you work.
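To make the memory idea concrete, here is a toy sketch of per-turn vector memory with cosine-similarity recall. This is not jcode's implementation (which is in Rust and uses real embeddings); the bag-of-words `embed` function and the `TurnMemory` class are my own illustrative stand-ins.

```python
import math

def embed(text):
    # Toy embedding: bag-of-words counts. A real harness would use a
    # learned embedding model here; this just makes the recall runnable.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TurnMemory:
    """Stores one vector per conversation turn; recall ranks by similarity."""

    def __init__(self):
        self.turns = []  # list of (text, vector) pairs, one per turn

    def store(self, text):
        self.turns.append((text, embed(text)))

    def recall(self, query, k=2):
        # Automatic recall: rank all stored turns against the query vector.
        qv = embed(query)
        ranked = sorted(self.turns, key=lambda t: cosine(qv, t[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

The point of the pattern: recall is a similarity search over turn vectors, not something the agent has to remember to write into a file like CLAUDE.md.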
Clief Notes
skool.com/cliefnotes
Jake Van Clief, giving you the Cliff notes on the new AI age.