I've built something. Want your honest take.
Some of you have heard me talk about the system I use to run my consulting business — the AI operating system I've been building for the past 10 weeks. I finally put a name on it and a website behind it. It's called Refracted Cortex.

The short version: it sits on top of whatever AI you already use (Claude, GPT, Gemini) and makes it actually remember who you are. Your values, your commitments, your blind spots. It doesn't reset every session. It pushes back when your decisions don't match what you said matters to you. Everything I've been teaching in Connected Intelligence about context, cognitive architecture, and thinking with AI — this is what it looks like when you turn that into a product.

The site is live and I'm opening a founding waitlist: 20 spots at $97/mo (locked for life — standard will be $197). Wanted to share it here first, before I post on LinkedIn tomorrow. But honestly? I'm sharing it here first because I trust this group's judgment more than the algorithm.

If you don't mind taking the time, I'd like to know:

1. Does the concept land when you read the site?
2. What's confusing or feels like a stretch?
3. Would you use something like this? Why or why not?

Link: https://refractedcortex.ai

Brutal honesty welcome. That's how this place works.
90% of people using AI are using it wrong — and it's not their fault.
Harvard Business Review just published one of the most important AI studies I've seen. They tracked 2,500 employees at KPMG over 8 months and analyzed 1.4 million AI prompts.

The finding: 90% adopted AI. Only 5% use it with any sophistication.

That's not a training problem. KPMG already trained these people. They had access, they had tools, they had support. And still — 85% of them are basically using a Ferrari to drive to the mailbox.

Here's what surprised me most: how often you use AI has almost nothing to do with how well you use it. The "just use it more" advice is dead. The study killed it with data.

The 5% who actually get results? Four things set them apart:

1. They treat AI as a reasoning partner, not a search engine
2. They delegate complex, multi-step tasks — not one-off questions
3. They define roles, constraints, and success criteria before they prompt
4. They use AI as a general-purpose thinking tool across their whole job — not just for writing emails

And here's the part that matters for everyone in this community: the sophisticated users were almost all experienced professionals. Not the youngest people in the room. Not the most "tech-savvy." The people with the deepest understanding of their work.

Your experience IS the advantage. Contextual range — knowing what good looks like because you've seen bad — is what makes AI actually useful. AI doesn't replace your judgment. It amplifies it. But only if you know how to think with it, not just use it.

The 85% gap isn't going to close with better prompts or more YouTube tutorials. It's going to close when people stop treating AI as a tool and start treating it as an extension of how they think. That's what we're building here.

**What's your experience?** Are you in the 5%, the 85%, or somewhere in between? And what do you think is actually holding most people back?
Green, Yellow, or Red? Real scenario.
I want to try something with this community. I'm going to describe a real work scenario, and I want you to tell me how you'd categorize it.

The scenario: Your boss asks you to create a presentation for the quarterly board meeting. The presentation needs to include:

- Revenue numbers from last quarter (pulled from your internal finance system)
- A competitive analysis of 3 key competitors
- Strategic recommendations for next quarter
- An appendix with employee satisfaction survey results

One task. Four very different components.

Here's my take -- but I want to hear yours first: Some parts of this are clearly Green (let AI handle it). Some are probably Yellow (AI assists, you verify). And at least one might be Red (keep AI away entirely).

How would you break this down? Which parts would you hand to AI, which would you verify carefully, and which would you keep away from AI entirely? And why?

Drop your thinking below. There's no single right answer -- that's what makes this interesting. The way YOU think about it depends on your industry, your company, and your risk tolerance. I'll share my breakdown in the comments tomorrow.
The thing nobody warns you about with AI
I'll be honest about something. I use AI every single day. I've built systems around it. I teach a course on it. And about twice a week, I still get output that makes me want to close my laptop and go outside.

Yesterday I spent 20 minutes trying to get Claude to write a simple client email. Twenty minutes. For an email. I could have written it myself in three.

The problem wasn't the tool. The problem was that I was being lazy about context. I was rushing. I gave it a vague ask and expected a specific result. And every time it gave me something generic, I got more frustrated instead of stopping to think about what I was actually asking for.

Here's what I've learned: AI frustration is almost always a mirror. When I'm frustrated with the output, it's usually because I haven't done the thinking work. I haven't been clear about what I want, who it's for, or what "good" looks like. That doesn't make the frustration less real. It just makes it useful information.

What's your most recent AI frustration? And in hindsight, was the problem the tool -- or was it something about how you were using it?
Digitally Demented
skool.com/digitallydemented
AI isn't a tech problem. It's a psychology problem. Daniel Walters teaches you how to think with AI — not just use it.