Welcome to Digitally Demented. Here's what you walked into.
I'm Daniel Walters. 15+ years in operations and marketing technology -- the intersection where marketing, tech, and operations either connect or fall apart. I'm the person who sits between people who build things and people who use them. I translate in both directions. I'm not a developer. I'm AuDHD (late-diagnosed), which means I think in systems and frameworks whether I want to or not. I built a 19-agent AI system to run my consulting business, and I'll tell you straight when something doesn't work. That's not a warning -- it's a feature.

A while back, something clicked for me: the doing isn't the work anymore. The thinking is the work. AI can draft your emails, research your competitors, analyze your data. That's not coming -- that's here. And most professionals I talk to are in one of three places:

1. Stuck. They know AI matters but don't know where to start.
2. Skeptical. They tried it, got mediocre results, and assumed AI was overhyped.
3. Spinning. They're using AI but starting from scratch every single time.

If any of that sounds like you, you're in the right place. This community exists because I got tired of watching smart people feel dumb about AI.

What's here:
- AI 101 (Free Course) -- Start here. Fundamentals without jargon. Classroom tab.
- Connected Intelligence: AI Fluency (Paid Course) -- 5 modules where you build your own cognitive architecture: a working system for how you think and operate with AI. Every module produces a deliverable you keep. Details in the Classroom.
- Community -- Questions, wins, frustrations, resources. The only rule is be real.

What I ask:
- Introduce yourself below. Who you are, what you do, what brought you here. Even one sentence.
- Be direct. If something I post doesn't make sense or you disagree, say so. Honest conversation is how this place works.
- Share your work. AI wins, failures, experiments. We learn more from the failures.

Your first move:
1. Drop an intro in the comments
2. Check out AI 101 in the Classroom
3. Browse what others are talking about and jump in
Behind today's LinkedIn post: how to configure an AI to defer-and-challenge (5 patterns from my actual stack)
Posted on LinkedIn this morning (link) about the gap between domain knowledge and architecture. Short version: domain knowledge is the fuel; architecture is whether the engine turns. Two consultants with identical expertise can get opposite trajectories from the same AI based on how the system is configured around it. The public version stops there.

Here's what "configured to defer-and-challenge" actually looks like in my stack: five patterns I've built into Lennier (my Chief of Staff agent). All five are pattern-level -- you can build them into ChatGPT, Claude projects, custom GPTs, your own system. Nothing here is platform-specific.

1. Stated-values gating. Before any output ships, the agent has to be able to justify it against my stated values. My system prompt has a values block, and the agent is instructed to flag when an output it's about to produce conflicts. Example: "If a recommendation centers revenue over relationships, surface that conflict before writing." This catches the moments where AI produces "good" advice that's actually drift.

2. Assumption-surfacing as a default. Instead of produce-first-justify-later, the agent outputs its assumptions BEFORE the recommendation: "Here's what I'm assuming about [X]. If any of these are wrong, the rest of this answer changes." Cheap to read, expensive to skip.

3. Confirmation by default, not by exception. Explicit instruction: "When I'm about to take an action with consequences -- send an email, ship a post, modify a file outside scope -- ask first." Without it, the default is "produce the work product." With it, the default is "produce a draft and check."

4. Anti-sycophancy clause. The system prompt literally says: "If I'm wrong, say so. If I'm rationalizing, name it. If I'm asking the wrong question, push back before answering." When the agent drifts from this, the correction goes back into memory so it doesn't drift the same way twice.

5. Drift detection at session start.
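Since these patterns are instruction blocks rather than platform features, one way to make them concrete is to compose them into a single system prompt programmatically. A minimal sketch, assuming a hypothetical `build_system_prompt` helper and illustrative values -- none of these names or strings come from Lennier's actual configuration:

```python
# Hypothetical sketch: assembling a defer-and-challenge system prompt from
# the pattern blocks described above. VALUES and CONSEQUENTIAL_ACTIONS are
# placeholders -- substitute your own values block and action list.

VALUES = [
    "Relationships over short-term revenue",
    "Direct, honest feedback over comfortable agreement",
]

CONSEQUENTIAL_ACTIONS = [
    "send an email",
    "ship a post",
    "modify a file outside scope",
]

def build_system_prompt(values, consequential_actions):
    """Compose the four instruction-level patterns into one prompt string."""
    blocks = [
        # Pattern 1: stated-values gating
        "VALUES:\n" + "\n".join(f"- {v}" for v in values)
        + "\nBefore any output ships, check it against these values. "
          "If a recommendation conflicts with one, surface the conflict "
          "before writing.",
        # Pattern 2: assumption-surfacing as a default
        "Before any recommendation, state your assumptions first: "
        "'Here's what I'm assuming about [X]. If any of these are wrong, "
        "the rest of this answer changes.'",
        # Pattern 3: confirmation by default
        "When about to take an action with consequences ("
        + ", ".join(consequential_actions)
        + "), produce a draft and ask first.",
        # Pattern 4: anti-sycophancy clause
        "If I'm wrong, say so. If I'm rationalizing, name it. "
        "If I'm asking the wrong question, push back before answering.",
    ]
    return "\n\n".join(blocks)

prompt = build_system_prompt(VALUES, CONSEQUENTIAL_ACTIONS)
print(prompt)
```

The resulting string goes wherever your platform accepts persistent instructions: a Claude project's custom instructions, a custom GPT's instruction field, or the system message of an API call. Pattern 5 (drift detection) lives in session-start behavior rather than the static prompt, so it isn't sketched here.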
Accepted to SlossTech - "Your Brain Isn't Broken. Your Systems Are."
I just received the email confirming that one of the sessions I submitted, "Your Brain Isn't Broken. Your Systems Are," has been accepted to Sloss Tech this year! Below is a synopsis of what I'll be talking about. If you're around Sloss Tech, I hope you'll be able to attend. If not, I'll see what I can do about getting a video of it posted here at the very least. Thank you to everyone for your amazing feedback and help over these past couple of months. It means the world to me. <3

----

I was diagnosed with AuDHD (Autism + ADHD) in my late thirties, after 15 years of building operations systems for other people's organizations. Turns out I was building the external structure my brain needed all along -- I just didn't know why.

When I started building cognitive architecture with AI, every neurodivergent accommodation became a design feature. Scope creep checks that fire automatically. Perfectionism circuit breakers. Context-switching protection. Accountability systems that don't rely on willpower. The architecture doesn't fix my brain. It compensates for how it actually works.

Here's the thing founders don't talk about: the traits that make building hard -- hyperfocus that distorts priority, pattern recognition that outruns execution, the inability to stop optimizing -- are exactly the traits this architecture was designed to support. I've since deployed this approach for other operators and founders, and each one's working style gets encoded into the architecture, not overridden by it.

Founders are disproportionately neurodivergent. Only a few people worldwide are building AI systems that treat that as an asset instead of a liability. This talk is about what that looks like in practice -- including what still breaks.
90% of people using AI are using it wrong — and it's not their fault.
Harvard Business Review just published one of the most important AI studies I've seen: 2,500 employees at KPMG, tracked over 8 months, with 1.4 million AI prompts analyzed.

The finding: 90% adopted AI. Only 5% use it with any sophistication.

That's not a training problem. KPMG already trained these people. They had access, they had tools, they had support. And still -- 85% of them are basically using a Ferrari to drive to the mailbox.

Here's what surprised me most: how often you use AI has almost nothing to do with how well you use it. The "just use it more" advice is dead. The study killed it with data.

The 5% who actually get results? Four things set them apart:

1. They treat AI as a reasoning partner, not a search engine
2. They delegate complex, multi-step tasks -- not one-off questions
3. They define roles, constraints, and success criteria before they prompt
4. They use AI as a general-purpose thinking tool across their whole job -- not just for writing emails

And here's the part that matters for everyone in this community: the sophisticated users were almost all experienced professionals. Not the youngest people in the room. Not the most "tech-savvy." The people with the deepest understanding of their work.

Your experience IS the advantage. Contextual range -- knowing what good looks like because you've seen bad -- is what makes AI actually useful. AI doesn't replace your judgment. It amplifies it. But only if you know how to think with it, not just use it.

The 85% gap isn't going to close with better prompts or more YouTube tutorials. It's going to close when people stop treating AI as a tool and start treating it as an extension of how they think. That's what we're building here.

**What's your experience?** Are you in the 5%, the 85%, or somewhere in between? And what do you think is actually holding most people back?
What's the AI task you've been avoiding?
Not the one you tell people you're "going to get to." The actual one. The thing you keep rationalizing away because you don't quite know how to start, or you tried once and it was a mess, or you secretly think AI can't actually help with that thing. No judgment. I want to know what's hard.

I'll go first: for me it was my daily briefing -- specifically the dispatch board, the piece that's supposed to streamline everything. I built the cognitive architecture for it. But every morning I'd open it and feel overwhelmed. This morning I finally saw why: each item on the board was missing the context I needed to actually start. The apprehension wasn't about AI capability. It was my cognitive load walking into a context-less list.

The fix was simple once I named it: I had my chief of staff pull the context per item before I open the board. Capability was never the problem. Clarity was.

Drop yours below. We'll workshop a few in the comments through the end of the week.
Digitally Demented
skool.com/digitallydemented
AI isn't a tech problem. It's a psychology problem. Daniel Walters teaches you how to think with AI — not just use it.