AI Mastermind is happening in 12 hours
Pinned
Welcome to the Movement!
Introduce yourself and please share what you're grateful for today.
Pinned
What do you want to learn?
This is where you can provide input for the content of the classroom and future workshops. Suggest a topic (e.g. context engineering), a skill (e.g. how to use the Figma MCP to translate design to code), or a tool (e.g. Cursor for designers). Alternatively, like someone else's comment to vote for it. And if you have or see a topic you'd like to teach, feel free to leave a comment or DM Danny (if you're shy about it).
But no ads on Claude...
https://youtu.be/FBSam25u8O4?si=v-VTaziNP3wWIKvG
Non-Violent, Clean Communication for Agentic AI
Was just playing around with some thoughts in ChatGPT.

====

A practical guide to building agents that collaborate without burning energy

Agentic AI systems fail for the same reasons human teams do: over-context, unclear boundaries, moralized directives, and hidden agendas. The fix isn't more intelligence; it's clean interfaces. Here's how to apply Non-Violent, Clean Communication (NVCC) to agentic AI so emergence can happen without stalls, loops, or energy loss.

=== The Core Principle ===

Give each agent enough context to act: no more, no less. Too little → failure. Too much → paralysis. Clean communication is not cold. It's non-violent because it avoids coercion, overload, and implicit control.

=== The 4 Rules of NVCC for Agents ===

1. Separate Observation from Interpretation
Bad: "Agent A is failing to prioritize correctly."
Clean: "Agent A returned output X after input Y in 3.2s."
Agents should receive facts, not judgments. Interpretation creates hidden pressure and cascading corrections.

2. State the Need, Not the Narrative
Bad: "We need better results because the system looks unreliable."
Clean: "Goal: reduce error rate from 12% to <5% on task Z."
Narratives add noise. Needs create direction.

3. Make Requests, Not Commands
Bad: "Fix this immediately and coordinate with all other agents."
Clean: "Attempt solution A. Do not consult other agents unless confidence <0.6."
Requests preserve autonomy. Autonomy enables emergence.

4. Explicitly Bound Responsibility
Bad: "Handle the issue end-to-end."
Clean: "Your scope ends at generating options. Do not execute."
Unbounded responsibility causes agents (and humans) to overreach, loop, or stall.

=== Why This Works ===

Clean interfaces prevent:
- Recursive awareness ("What are the other agents thinking?")
- Moral load ("I must fix everything.")
- Energy leakage (over-coordination)

They enable:
- Faster alignment
- Faster detection of non-alignment
- Emergent solutions no one pre-designed
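To make the four rules concrete, here is a minimal sketch of what a "clean" inter-agent message could look like. It's purely illustrative Python, not tied to any agent framework; the field names, the judgment-word heuristic, and the 0.6 confidence threshold are assumptions lifted from the examples above.

```python
# Hypothetical message schema illustrating the four NVCC rules.
# Field names and the validation heuristics are assumptions for illustration,
# not part of any specific agent framework.
from dataclasses import dataclass

# Crude heuristic: words that signal interpretation rather than observation.
JUDGMENT_WORDS = {"failing", "unreliable", "bad", "wrong", "lazy"}


@dataclass
class CleanRequest:
    observation: str   # Rule 1: facts only ("returned output X after input Y in 3.2s")
    goal: str          # Rule 2: a measurable need ("error rate <5% on task Z")
    request: str       # Rule 3: a specific, non-coercive action ("attempt solution A")
    scope: str         # Rule 4: where responsibility ends ("generate options; do not execute")
    escalate_below_confidence: float = 0.6  # when the agent may consult other agents

    def violations(self) -> list[str]:
        """Flag common ways a message drifts away from the four rules."""
        problems = []
        if any(w in self.observation.lower() for w in JUDGMENT_WORDS):
            problems.append("observation contains interpretation, not facts")
        if "because" in self.goal.lower():
            problems.append("goal carries a narrative instead of a plain need")
        if any(w in self.request.lower() for w in ("immediately", "must", "fix everything")):
            problems.append("request reads as a command, not a request")
        if not self.scope.strip():
            problems.append("responsibility is unbounded (no scope given)")
        return problems


if __name__ == "__main__":
    msg = CleanRequest(
        observation="Agent A returned output X after input Y in 3.2s.",
        goal="Reduce error rate from 12% to <5% on task Z.",
        request="Attempt solution A.",
        scope="Your scope ends at generating options. Do not execute.",
    )
    print(msg.violations() or "clean")  # prints "clean" for this well-formed message
```

The point of the sketch is that each rule becomes a separate, inspectable field, so an orchestrator can reject or rewrite messages that smuggle in judgment, narrative, commands, or unbounded scope before they ever reach another agent.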
AI Summit #2
Shocked by how fast January flew! It's that time again: planning for the next Summit is officially underway. 🙌 Please chime in on this thread by answering these two questions: 1. Should it be one or two days? 2. Do you want to be involved? If so, in what capacity? Feel free to DM me if you'd like to chat more!