
Memberships

AI Automation Vault

9.9k members • Free

AI Automation Society Plus

3.6k members • $99/month

Claude Code Kickstart

533 members • Free

AI Automation Agency Hub

315.5k members • Free

Agentic Academy

1.3k members • $37/month

AI Accelerator

18.5k members • Free

AI Bits and Pieces

719 members • Free

Brendan's AI Community

24.5k members • Free

AI Sales Agency Launchpad

14.8k members • Free

7 contributions to AI Bits and Pieces
NotebookLM in 10 Bites: Set Up (1/10)
Are you open to trying something new together? 🗓️ For this series, I'll walk through NotebookLM one small step at a time, each one digestible in under 3 minutes. Not a masterclass. Not a firehose. Just one bite-sized step each day that builds on the next.

NotebookLM is one of those tools that, once you understand how to use it, can turn documents, notes, transcripts, and messy information into something useful. That's the goal here. By the end of the 10 days, you should understand the basics well enough to start using NotebookLM for your own work, learning, or projects.

🗓️ Day 1: Get Set Up
For today, just get signed in. If you already have an account, great: you're one step ahead.

✅ Step 1: Go to NotebookLM and create your account
- Go to: https://notebooklm.google.com
- Click "Try NotebookLM"
- Sign in with your Google account

If you already use Gmail, this takes about 10 seconds. That's it for today. Most people don't get stuck because something is hard. They get stuck because they haven't started. This step removes friction. Today is just about getting started. 🚀 Tomorrow, we'll create your first notebook and add source information.

What is NotebookLM? 📒 NotebookLM is an AI tool from Google that works with your documents. Instead of pulling from the internet, it helps you organize, summarize, and generate insights from the information you provide: notes, PDFs, transcripts, and more.
2 likes • 16h
@Michael Wacht Interesting split into 10 bites/steps over 10 days. Will you put all 10 of them in the AI Practitioner section of the Classroom?
🎉Celebrating 600 Members and Growing!
We just crossed 600 members in AI Bits & Pieces. Consistent growth from day one, fueled by people trying to understand what AI actually means for their work and day-to-day life, and how it can help them stand out in the workforce, business environment, or executive ranks. That's been the goal from the start.

A place for:
🔵 AI Curious: figuring out what this all is
🟢 AI Enthusiasts: using it regularly
🟠 AI Practitioners: applying it to real work
🟣 Enterprise: thinking about scale across teams

What's been interesting isn't just the number; it's the mix of people and the conversations starting to take shape. Members are building small things. Members are asking in-depth questions. And members are starting to connect the dots between tools and outcomes.

A special shoutout to each and every member, and the people who have supported me from the beginning: @Michele Wacht @Dena Dion @Debra Schmitt @Patti Hoekstra @Mark Zayec @Matthew Sutherland @Jason Hagen @Usman Mohammed @Nick Mohler @Eduard Friesen

We have some exciting updates and new offerings for the community designed to help you win the AI game in life, at work, as a business owner, or as an agency.

A heartfelt thank you.
Michael
1 like • 26d
Congrats, @Michael Wacht! Keep going toward your next goal for this community by adding more interesting and useful content.
🌀AI Quirks — When AI Matches Your Prompt Tone Too Well
🌀 The Quirk: When a prompt sounds authoritative, AI often mirrors that confidence, even if the answer itself is a best guess.

🌀 What's Going On:
- AI is trained to mirror tone as much as intent.
- Confident prompts signal "this is established knowledge."
- The model fills in missing context with the most likely answer.
- Fluency can hide uncertainty, especially with new tools or edge cases.

🌀 What To Do If You See It:
- Ask the model to flag assumptions before answering.
- Request uncertainty explicitly: "What might be wrong here?"
- Reframe the prompt as exploratory, not declarative.

👉 Try these prompts:
- "Answer cautiously. If any part is a guess, say so."
- "Answer cautiously. If you're unsure about any part, say so."
- "Answer cautiously. Identify any assumptions and note where certainty is low."
- "Answer cautiously. Call out any guesses."

Why This Matters: AI confidence is a delivery style, not a truth signal. Knowing when to slow the model (LLM) down is part of real AI fluency.

🎯 AI Bits & Pieces: helping people and businesses adopt AI with confidence.
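The "request uncertainty explicitly" technique above can be sketched in code if you call an LLM programmatically. A minimal Python sketch (my own illustration, not from the post): the `hedge_prompt` helper and the example prompt are hypothetical names, and the actual API call to your provider is left out.

```python
# Sketch: prepend a "caution" preamble so the model flags assumptions
# and guesses instead of mirroring the prompt's confident tone.
CAUTION_PREAMBLE = (
    "Answer cautiously. Identify any assumptions and note where "
    "certainty is low. If any part of your answer is a guess, say so.\n\n"
)

def hedge_prompt(prompt: str) -> str:
    """Reframe a declarative prompt as an exploratory one."""
    return CAUTION_PREAMBLE + prompt

# An authoritative-sounding prompt, now asking the model to flag guesses.
# Pass the result to whichever LLM client you use.
hedged = hedge_prompt("Explain why this tool removed feature X in v2.")
print(hedged)
```

The same idea works in a chat UI: paste the preamble ahead of your question, or set it once as a system/custom instruction so every prompt is hedged by default.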
2 likes • Feb 5
Saw this as well: I wrote a confident prompt and got a very confident reply. Then I asked clarifying questions, and the LLM was not so confident anymore.
1 like • Feb 6
@Michael Wacht Exactly. Humans created LLMs and gave them not only our good features but also our "not very good" ones. 🙃
🤝Community Spotlight: Matthew Sutherland
Today, we would like to recognize @Matthew Sutherland for his contributions inside the AI Bits & Pieces community.

Matthew has been a great sounding board, and has also spent time reviewing Claude Code–related content and providing specific, actionable suggestions. His feedback has focused on structure, clarity, and how ideas translate into practical use. His comments are thoughtful and grounded in hands-on experience. They tend to clarify intent, tighten explanations, and make the material easier to apply for others working through the same topics.

Outside the community, Matthew is the founder of Byteflow AI, where he builds and runs AI systems that automate real operational work. His focus is on workflows, agents, and integrations that run in production and support day-to-day business execution. His work follows a clear framework (Scope. Shoot. Solve.) emphasizing problem definition, working deliverables, documentation, and clean handoff. Engagements range from operational assessments and system builds to incident response and targeted briefings.

With more than 25 years of experience across technology, operations, and business development, Matthew brings a practical, execution-first perspective to applied AI and automation.

We appreciate the time and care Matthew puts into strengthening shared work and contributing to the quality of the conversation. Thank you, Matthew, for the role you play in helping this community learn and improve together.

Follow Matt on LinkedIn: https://www.linkedin.com/in/matthewsutherland/
For a highlight of Matt's post: https://www.skool.com/ai-bits-and-pieces/classroom/5bebee2e?md=13b025f0574742bca30bc136b78d0d7e
2 likes • Feb 6
Congrats @Matthew Sutherland! You definitely deserve it.
💎 Prompt Series Part 3 of 5: When LLM Selection Starts to Matter
After learning how to prompt clearly and iterate effectively, a natural question emerges: Does it matter which LLM I use if I'm iterating well?

In the short run, the honest answer is no. If you're clear in your intent and willing to refine direction, most modern LLMs will get you where you need to go. Prompting and iteration do a lot of the heavy lifting early on. That's why many people experience an initial breakthrough and think, "Okay, I've got this." And they do. At first.

💎 Why Iteration Levels the Field Early
When you're iterating well, you're doing a few important things:
- Clarifying what you actually want
- Responding to output instead of restarting
- Adjusting direction in small, intentional steps

Those behaviors transfer. They work across LLMs because the interaction pattern is the same: input → response → refinement. In that phase, differences between LLMs fade into the background. You're building skill, not dependency.

💎 When Fit Begins to Show Up
As AI becomes something you use regularly, not occasionally, another shift starts to happen. You're no longer experimenting. You're working. And that's when fit begins to show up. Not in dramatic ways, but in small ones that compound over time.

You notice how an LLM responds to follow-ups. How much structure it assumes. How easily you can steer it without over-explaining. Tone and writing style are often where this becomes most obvious. Some people gravitate toward Claude because it feels more measured, structured, and editorial. Others prefer ChatGPT because it feels more conversational, adaptive, and easy to steer through quick iteration. Neither is better. They simply feel different to work with. And once AI becomes part of your daily rhythm, those differences start to matter.

To be clear, this isn't about specialty capabilities like coding, image creation, or domain-specific features. It's about how naturally an LLM mirrors:
- Your tone
- Your writing style
- The way you think through ideas
1 like • Feb 5
I liked the fluency part. It's so true: fluency in the interaction comes with LLM usage experience, until it starts to feel natural.
Serge Petryk
@serge-petrik-5969
Automation Builder helps SMB owners with AI Projects: AI Audit, AI Workflow Automation (n8n, Make), Implementation, Optimization & Support (Retainer)

Active 7h ago
Joined Feb 2, 2026
Ukraine 🇺🇦