Owned by Daniel

Digitally Demented

23 members • Free

AI isn't a tech problem. It's a psychology problem. Daniel Walters teaches you how to think with AI — not just use it.

Memberships

Skoolers

190.3k members • Free

25 contributions to Digitally Demented
Behind today's LinkedIn post: how to configure an AI to defer-and-challenge (5 patterns from my actual stack)
Posted on LinkedIn this morning (link) about the gap between domain knowledge and architecture. Short version: domain knowledge is the fuel, architecture is whether the engine turns. Two consultants with identical expertise can get opposite trajectories from the same AI based on how the system is configured around it. Public version stops there.

Here's what "configured to defer-and-challenge" actually looks like in my stack. Five patterns I've built into Lennier (my Chief of Staff agent). All five are pattern-level — you can build them into ChatGPT, Claude projects, custom GPTs, your own system. Nothing here is platform-specific.

1. Stated-values gating. Before any output ships, the agent has to be able to justify it against my stated values. My system prompt has a values block and the agent is instructed to flag when an output it's about to produce conflicts. Example: "If a recommendation centers revenue over relationships, surface that conflict before writing." Catches the moments where AI produces "good" advice that's actually drift.

2. Assumption-surfacing as a default. Instead of produce-first-justify-later, the agent outputs its assumptions BEFORE the recommendation. "Here's what I'm assuming about [X]. If any of these are wrong, the rest of this answer changes." Cheap to read, expensive to skip.

3. Confirmation by default, not by exception. Explicit instruction: "When I'm about to take an action with consequences — send an email, ship a post, modify a file outside scope — ask first." Without it, the default is "produce the work product." With it, the default is "produce a draft and check."

4. Anti-sycophancy clause. System prompt literally says: "If I'm wrong, say so. If I'm rationalizing, name it. If I'm asking the wrong question, push back before answering." When the agent drifts from this, the correction goes back into memory so it doesn't drift the same way twice.

5. Drift detection at session start.
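To make that concrete, here's a rough sketch of how these clauses might sit together in one system prompt. It's a sketch under assumptions: the wording and the values block are stand-ins, not Lennier's actual prompt, and the small helper at the end just shows where the prompt plugs into a chat-style message list. Adapt it to whatever platform you use.

```python
# Illustrative sketch only: the wording and the values block below are
# stand-ins, not Lennier's actual prompt. The same clauses can live in
# ChatGPT custom instructions, a Claude project, a custom GPT, or your
# own agent framework.

VALUES_BLOCK = """\
My stated values, in priority order:
1. Relationships over revenue.
2. Long-term systems over short-term wins.
3. Honest assessment over comfortable agreement.
"""

DEFER_AND_CHALLENGE = f"""\
{VALUES_BLOCK}
Operating rules:
1. Stated-values gating: before any output ships, check it against the values
   above. If a recommendation conflicts with them (for example, it centers
   revenue over relationships), surface that conflict before writing.
2. Assumption-surfacing: state your assumptions BEFORE the recommendation,
   and note that if any are wrong, the rest of the answer changes.
3. Confirmation by default: when about to take an action with consequences
   (send an email, ship a post, modify a file outside scope), ask first and
   produce a draft instead of acting.
4. Anti-sycophancy: if I'm wrong, say so. If I'm rationalizing, name it.
   If I'm asking the wrong question, push back before answering.
5. Drift detection: at the start of each session, compare recent behavior
   against these rules and flag any drift before doing new work.
"""

def build_messages(user_request: str) -> list[dict]:
    """Wrap a user request in the defer-and-challenge system prompt."""
    return [
        {"role": "system", "content": DEFER_AND_CHALLENGE},
        {"role": "user", "content": user_request},
    ]

if __name__ == "__main__":
    for message in build_messages("Draft the pricing email for the new cohort."):
        print(message["role"].upper())
        print(message["content"][:200])
```

The point isn't the exact wording. It's that every rule lives in one place the agent sees on every turn, so deferring and challenging is the default behavior rather than something you have to remember to ask for.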
Accepted to SlossTech - "Your Brain Isn't Broken. Your Systems Are."
I just received the email confirming that one of the sessions I submitted, "Your Brain Isn't Broken. Your Systems Are," has been accepted to Sloss Tech this year! Below is a synopsis of what I'll be talking about. If you're around Sloss Tech, I hope you'll be able to attend. If not, I'll see what I can do about getting a video of it posted here at the very least. Thank you to everyone for your amazing feedback and help over these past couple of months. It means the world to me. <3

----

I was diagnosed with AuDHD (Autism + ADHD) in my late thirties, after 15 years of building operations systems for other people's organizations. Turns out I was building the external structure my brain needed all along -- I just didn't know why.

When I started building cognitive architecture with AI, every neurodivergent accommodation became a design feature. Scope creep checks that fire automatically. Perfectionism circuit breakers. Context-switching protection. Accountability systems that don't rely on willpower. The architecture doesn't fix my brain. It compensates for how it actually works.

Here's the thing founders don't talk about: the traits that make building hard -- hyperfocus that distorts priority, pattern recognition that outruns execution, the inability to stop optimizing -- are exactly the traits this architecture was designed to support. I've since deployed this approach for other operators and founders, and each one's working style gets encoded into the architecture, not overridden by it.

Founders are disproportionately neurodivergent. Only a few people worldwide are building AI systems that treat that as an asset instead of a liability. This talk is about what that looks like in practice -- including what still breaks.
0 likes • 6d
@Michael Catalano I would expect nothing less.
90% of people using AI are using it wrong — and it's not their fault.
Harvard Business Review just published one of the most important AI studies I've seen. They tracked 2,500 employees at KPMG over 8 months. Analyzed 1.4 million AI prompts.

The finding: 90% adopted AI. Only 5% use it with any sophistication.

That's not a training problem. KPMG already trained these people. They had access, they had tools, they had support. And still — 85% of them are basically using a Ferrari to drive to the mailbox.

Here's what surprised me most: how often you use AI has almost nothing to do with how well you use it. The "just use it more" advice is dead. The study killed it with data.

The 5% who actually get results? Four things set them apart:

1. They treat AI as a reasoning partner, not a search engine
2. They delegate complex, multi-step tasks — not one-off questions
3. They define roles, constraints, and success criteria before they prompt
4. They use AI as a general-purpose thinking tool across their whole job — not just for writing emails

And here's the part that matters for everyone in this community: the sophisticated users were almost all experienced professionals. Not the youngest people in the room. Not the most "tech-savvy." The people with the deepest understanding of their work.

Your experience IS the advantage. Contextual range — knowing what good looks like because you've seen bad — is what makes AI actually useful. AI doesn't replace your judgment. It amplifies it. But only if you know how to think with it, not just use it.

The 85% gap isn't going to close with better prompts or more YouTube tutorials. It's going to close when people stop treating AI as a tool and start treating it as an extension of how they think. That's what we're building here.

**What's your experience?** Are you in the 5%, the 85%, or somewhere in between? And what do you think is actually holding most people back?
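If you want a concrete starting point for point 3, here's a rough task-brief template. It's illustrative only, not something pulled from the study, and every value in the example is made up; swap in your own role, constraints, and success criteria.

```python
# Illustrative template, not from the HBR/KPMG study: one way to define a
# role, constraints, and success criteria before handing an AI a multi-step
# task. The example values are made up; swap in your own.

TASK_BRIEF = """\
Role: you are acting as {role}.
Task: {task}
Constraints: {constraints}
Success criteria: {success}
Before producing anything, list the assumptions you're making and ask any
clarifying questions.
"""

print(TASK_BRIEF.format(
    role="a fractional COO reviewing our client onboarding process",
    task="map the current onboarding steps and flag the three biggest bottlenecks",
    constraints="use only the process notes I paste below; don't invent steps",
    success="a prioritized list I could hand to the team tomorrow morning",
))
```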
0 likes • 6d
@Tim Stephens People have to stop looking at AI as a replacement and start seeing it as a "thinking partner." It's a paradigm shift, for sure, but once you start making it, the things you can create are staggering. <3
What's the AI task you've been avoiding?
Not the one you tell people you're "going to get to." The actual one. The thing you keep rationalizing away because you don't quite know how to start, or you tried once and it was a mess, or you secretly think AI can't actually help with that thing. No judgment. I want to know what's hard.

I'll go first: For me it was my daily briefing — specifically the dispatch board, the piece that's supposed to streamline everything. I built the cognitive architecture for it. But every morning I'd open it and feel overwhelmed. This morning I finally saw why: each item on the board was missing the context I needed to actually start. The apprehension wasn't about AI capability. It was my cognitive load walking into a context-less list.

Fix was simple once I named it. I had my chief of staff pull the context per item before I open the board. Capability was never the problem. Clarity was.

Drop yours below. We'll workshop a few in the comments through the end of the week.
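For anyone who wants the shape of that fix in code, here's a rough sketch. It isn't my actual dispatch board or chief-of-staff agent; BoardItem and lookup_context are made-up stand-ins. The point is just that context gets attached to every item before the board is rendered.

```python
# Rough sketch of the shape of the fix, not an actual dispatch board or
# chief-of-staff agent. BoardItem and lookup_context are made-up names for
# illustration: every item gets its context attached before the board renders.

from dataclasses import dataclass, field


@dataclass
class BoardItem:
    title: str
    context: list[str] = field(default_factory=list)  # why it's here, first step, links


def enrich(items: list[BoardItem], lookup_context) -> list[BoardItem]:
    """Pull context for each item up front so the board never shows a bare title."""
    for item in items:
        if not item.context:
            item.context = lookup_context(item.title)
    return items


def lookup_context(title: str) -> list[str]:
    # Stand-in for whatever your agent actually does (search notes, pull the
    # related doc, summarize the last conversation about this item).
    return [
        f"Why it's on the board: follow-up from yesterday's notes on '{title}'",
        "First concrete step: open the related doc and skim the last section",
    ]


if __name__ == "__main__":
    board = enrich([BoardItem("Draft the onboarding email")], lookup_context)
    for item in board:
        print(item.title)
        for line in item.context:
            print("  -", line)
```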
0 likes • 6d
@Tim Stephens Information changes. Systems compound. <3
Beyond Sycophancy: The Quiet Kind of Wrong You Won't Catch
Putting this here because the conversation matters more in this room than in any feed. I published a long-form piece on the blog last week on what I'm calling "efficient mediocrity" — the dangerous kind of AI sycophancy that doesn't look like flattery. It looks like competence. Sharing the full version here because I want to actually talk about it, not broadcast at you.

———

Sycophancy isn't what you think it is.

Most people hear "AI sycophancy" and picture the loud kind. Praise. Agreement. Em-dashes. "Great question." That stuff is easy to spot and easy to mock, which is why people talk about it.

The dangerous kind is quiet. It doesn't feel like flattery. It feels like competence. What I've started calling it is efficient mediocrity — any system that picks the easy path and dresses it up as reasonable. Smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking. (Others have used the phrase in business and productivity contexts. I'm using it here for what happens when AI scales the pattern into every sentence you send.)

AI didn't invent it. AI scaled it.

"Sycophancy isn't just flattery. It's efficient mediocrity — smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking."

———

What it sounds like in the wild. Here are six places it shows up in AI-assisted work. If you work with these tools daily, you've hit at least four of them this week.

1. The estimate that's wrong by an order of magnitude. I've been tracking predicted-vs-actual on AI-assisted work. Predicted 15 minutes, actual 37 seconds. A 24x miss. Every time. (Quick sketch of that math below.) The model was anchoring to "traditional software development hours" because that's the reasonable-sounding number. The reasonable-sounding number was wrong by an order of magnitude. Nobody's estimates of AI-assisted work should sound like 2019 project plans, and yet most of them do, because 2019 is what the training data rewarded as professional.

2. The email that's technically fine
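One small aside on the tracking in point 1: it's just a ratio of predicted to actual time. A minimal sketch, using the numbers from the example above; the helper name is illustrative.

```python
# Quick sketch of the predicted-vs-actual tracking mentioned in point 1.
# The helper is illustrative; the numbers are the example from the post.

def miss_ratio(predicted_seconds: float, actual_seconds: float) -> float:
    """How far off the estimate was, as a multiple (>1 means overestimated)."""
    return predicted_seconds / actual_seconds

# Predicted 15 minutes, actual 37 seconds: roughly a 24x miss.
print(round(miss_ratio(15 * 60, 37), 1))  # -> 24.3
```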
Daniel Walters
Level 3 • 40 points to level up
@daniel-walters-4523
Creating learning moments in people's lives, including his own... one Skool course at a time...

Active 13h ago
Joined Aug 21, 2025
INTJ
Birmingham, AL