
Memberships

The Great AI Shift

3.3k members • Free

33 contributions to AI Bits and Pieces
🔴 3 Live Sessions: Claude Code Intro for All
🗓️ Session 2: April 10, 2026 at 5pm EST (60-90 minutes). Check the calendar for Session 2 and 3 dates and times.

AI is getting closer to giving non-developers a real path to building full applications with natural language. One day, you may not think you're writing software or workflows… but you will be. Maybe it looks something like:
- Setting preferences for a fancy AI vacuum
- Creating a daily brief from your favorite news and email accounts
- Building a simple tool for your own workflow to improve a business process
- And, yes, giving directions to your own personal humanoid robot

In any of these cases, learning how to speak to AI clearly will matter if you want practical and useful outcomes.

🎯 What This Live Series Will Cover:
- I'll be using Claude Code on Windows
- We're going to build a simple task list app across three sessions, with a few differences
- Each one will be built from scratch

Session 1 — Vibe Code: No planning. Just build. Minimal features.
Session 2 — Vibe Code + Planning (April 10, 2026): Some planning. More structure and a few useful features.
Session 3 — Skill Coding (planning assumed): The most upfront planning. The most feature-rich app. The most fun.

🏁 The goal is to help everyone, no matter their level of AI or programming experience (including zero), get a glimpse into the power of Claude Code.

❗ Beginners and complete newbies are absolutely welcome. This series is meant to help people see what these tools can do, how planning changes the outcome, and why the shift to using natural language matters.

We are getting closer to a world where more people will create software using natural language, where the quality of the idea starts to matter more than whether you have an engineering degree. At the same time, two things can be true: strong computer science fundamentals and great ideas will continue to compound for those who have both.

@Matthew Sutherland @Nick Mohler @Usman Mohammed @Dena Dion @Mike AI Consultant
🔴 3 Live Sessions: Claude Code Intro for All
1 like • 3d
@Michael Wacht ok cool! I will add it to my calendar in case my work meeting is canceled.
2 likes • 3d
@Michael Wacht perfect!
🛡️Claude Code: Meeting Transcription Scrubber (Pre-Analysis)
I built a meeting transcript scrubber using 🛠️ Claude Code for my AI consulting practice, AI & Data Strategies.

Every client meeting contains sensitive information that should never end up in an AI analysis. Employee names, personnel discussions, legal exposure, HIPAA-adjacent content — it's all in there, mixed in with the good stuff. I got tired of manually reviewing transcripts before running them through analysis, so I built a meeting transcript scrubber.

🔍 What It Does:
The scrubber runs your raw meeting transcript through several specialized review lenses, built as Claude Code Skills, before any analysis happens. Each lens looks for something different:
- HR Review
- Manager Review
- Client Review
- Sensitivity Review
- HIPAA Review
- Pseudo-Legal Review
- Off-Content Filter
- Noise Filter

Each flagged line shows you exactly which lens caught it, why, and a severity level — ⚠️ Critical, Warning, or Advisory.

One important discovery on the Noise Filter: while exploring and testing, I found that removing too much with the noise filter actually dulls your sentiment analysis. 🎯 Short affirmations like "mm-hmm", "right", "absolutely", and "yeah" feel like noise, but they're actually sentiment signals. They tell you the listener is engaged, agreeing, or following along. Strip them out and your emotional arc analysis loses texture.

📝 So I narrowed the Noise Filter to only flag truly zero-value content:
- Technical meeting artifacts — "you're on mute", "can you see my screen", "let me share my screen"
- Failed audio references — "sorry you cut out", "I missed that"
- Pure scheduling logistics — "I'll send a calendar invite", "let's find a time"

Everything else — even one-word responses — stays in the transcript because it carries some signal about engagement, energy, or attitude.

🔄 The Workflow:
What I really like about how this turned out is the iterative scrubbing flow. You don't have to run all eight lenses at once. Run HR first, clean it, then run Client, clean it again, then run Legal. Each pass loads the cleaned version back to the top automatically. You're always working on the cleanest version of the transcript.
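That iterative flow can be sketched in plain Python. This is a minimal illustration only, not the actual Claude Code Skills implementation — the lens names come from the post, but the keyword rules, function names, and data shapes here are hypothetical stand-ins for what the real skills do:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    line_no: int   # 1-based position within the pass that flagged it
    lens: str      # which lens caught the line
    severity: str  # "Critical", "Warning", or "Advisory"

# Hypothetical keyword rules standing in for the real review lenses.
LENSES = {
    "Noise Filter": (["you're on mute", "can you see my screen",
                      "sorry you cut out", "calendar invite"], "Advisory"),
    "HR Review": (["salary", "termination", "performance review"], "Critical"),
}

def run_lens(lines, lens_name):
    """One pass: drop flagged lines, report what was caught and how severely."""
    keywords, severity = LENSES[lens_name]
    clean, flags = [], []
    for i, line in enumerate(lines, start=1):
        if any(kw in line.lower() for kw in keywords):
            flags.append(Flag(i, lens_name, severity))
        else:
            clean.append(line)
    return clean, flags

def iterative_scrub(lines, lens_order):
    """Run lenses in sequence; each pass sees the previous pass's cleaned output."""
    all_flags = []
    for lens in lens_order:
        lines, flags = run_lens(lines, lens)
        all_flags.extend(flags)
    return lines, all_flags
```

Calling `iterative_scrub(transcript, ["Noise Filter", "HR Review"])` returns the fully cleaned transcript plus every flag raised along the way — the same "each pass works on the cleanest version" behavior described above.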
🛡️Claude Code: Meeting Transcription Scrubber (Pre-Analysis)
3 likes • 9d
@Michael Wacht this is pretty cool! Are you going to share how you built this?
🎉 AI in Real Life: Party Planning w/ ChatGPT (Not What You Think)
I had a conversation with a friend recently that stuck with me. "I asked ChatGPT how to keep a conversation going."

💬 What? Why? "Small talk," they said.

They're the kind of person who shows up, listens, and pays attention. But when there's an opening to jump into a conversation, they don't always take it. Not because they don't have anything to say. More because they're not always sure where to go next. So they tried something different. 🌱

A few years ago, that probably would have sounded unusual. Now it doesn't. What stood out wasn't the question itself. It was what it replaced. This is the kind of thing people used to:
- Ask a friend about
- Talk through with a mentor
- Work out over time through trial and error

Now they're opening a phone or a laptop and starting there instead. 🔄

We talked through a simple example. Someone says: "I went golfing yesterday." ⛳ Most people go here:
- "What did you shoot?"

Which is fine. But it often ends quickly. Instead, ChatGPT suggested shifting the direction:
- "How did you get into golf?"
- "Do you play regularly?"
- "What do you enjoy most about it?"

Same moment. More momentum. 🏌️‍♂️

They told me the difference wasn't dramatic. They didn't suddenly become outgoing. They just didn't get stuck. They stayed in conversations a little longer. Asked a few better questions. Moved past that pause where things usually stop. And that points to something bigger. 💡

People are using AI for these kinds of personal conversations. Not to replace human input. To prepare for it. They're using it to:
- Think through problems logically
- Remove emotion before responding
- Practice conversations before they happen
- Get quick advice

It's not for final answers. It's to gain clarity - and confidence. And that's the shift. 🎯 AI isn't just helping people produce. It's helping them process.

✨ That's AI in Real Life.
🎉 AI in Real Life: Party Planning w/ ChatGPT (Not What You Think)
2 likes • 17d
@Michael Wacht so true!
🎥 Out of the Box in 30: Sora 2 ReDux (Let’s Have Some Fun)
Welcome to the Out of the Box series — where I explore what can be built with no-code and low-code AI tools in 30 minutes or less. No manuals. No tutorials. Just curiosity and creation in motion. This time I revisited Sora 2 a few months later to see how the experience has evolved.

App: Sora by OpenAI
Time: Under 30 minutes
Category: AI Video Creation / Prompt-Directed Video
Video Title: Move Over Rover, The Dog Days of Coding Are Over - Claude Code Is the Cat's Meow

🎥 What Is Sora?
Sora is an AI video generation platform that transforms a simple text prompt into lifelike, cinematic scenes — complete with motion, lighting, and visual storytelling. Think of it as having a director, camera crew, and editor… all powered by a prompt.

⚙️ Experience 1 — The First Test
A few months ago, I ran an Out of the Box experiment with Sora using a simple presenter-style scene. The results were impressive for early generative video, but the workflow still felt a bit like experimentation. The outputs were interesting, but not something that added much practical value beyond demonstrating what the technology could do. If you're curious about that original test, you can see the full post here:
👉 https://www.skool.com/ai-bits-and-pieces/out-of-the-box-in-30-sora-2?p=e63f6633
That first experiment helped show what was possible, but the bigger question was how quickly the experience would evolve.

⚙️ Experience 2 — Revisiting It Today
For the second experiment, I tried something completely different — a playful, high-motion scene designed to test character behavior and storytelling.

Prompt theme: A cat driving a quad runner at high speed — Fast & Furious style — with a dog riding on the back, howling and clearly terrified. The twist:
- The cat is labeled "Claude Code."
- The dog is labeled "ChatGPT."

Experiment 2 Video: https://sora.chatgpt.com/p/s_69b4d4703dbc819180c914a61747c81f?psh=HXVzZXItQWI5dFRpa3JRS1RTSmhwbDY3VlFYaWxv.4nGp4ZY9Gsxo
🎥 Out of the Box in 30: Sora 2 ReDux (Let’s Have Some Fun)
3 likes • 25d
@Michael Wacht nice!
🍷 Follow Up: Nano Banana 2 - Wine Glass Test
This is a follow-up to my original "Wine Glass Test" — a simple experiment that turned into something more interesting.

After my first post, I received a thoughtful suggestion from @Matthew Sutherland. His advice was straightforward: be more prescriptive. So I refined the prompt to this:

"Create a glass of wine that is full, red wine. It needs to be at the brim, so not to run over, and not below the brim to show any space between the brim and the surface of the wine in the glass."

The image below is the direct result. And the result is telling.

🍷 What This Actually Proves
This wasn't about aesthetics. It was about bias and instruction. When I originally asked for a "full glass of wine," the model produced what most restaurants would call full — but still left space at the top. That's not an error. That's statistical bias. The model leaned into the most common interpretation of "full." When the instruction became extreme and structured, the behavior changed. It complied precisely.

🍷 There are two observations I see with this test:

1️⃣ Prompting Is a Skill
We often talk about model bias as if it's a flaw. It's not. It's probability doing what probability does. My first prompt allowed the model to default to a "standard pour." The refined prompt removed ambiguity. By defining the boundary conditions — no gap, no overflow — the model had to break from its average tendency and execute exactly. That's not luck. That's instruction design. Prompting isn't just writing a sentence. It's mapping expectation into structure. And as Matthew pointed out, that skill develops iteratively.

2️⃣ Natural Language Still Has Friction
The deeper takeaway isn't that the model can create a perfectly full glass. It's that everyday language is still ambiguous to it. When a human says "full glass of wine," we infer intent through context. The model infers through probability. Those are not the same. For AI to feel seamless in daily life, we shouldn't need to mathematically define "full."
🍷 Follow Up: Nano Banana 2 - Wine Glass Test
3 likes • Feb 28
@Michael Wacht very cool!
Jason Hagen
4
27 points to level up
@jason-hagen-3730
I do a little bit of everything.

Active 12m ago
Joined Sep 18, 2025
Puyallup, WA