
Owned by Matthew

AI for Life

33 members • Free

Claude Code lessons for Mac users. Operators share automation frameworks that work in production. Discover the highest-ROI automation opportunities.

Memberships

Claude Code Kickstart

544 members • Free

Skoolers

189k members • Free

AI Automation Society

298.7k members • Free

AI Bits and Pieces

579 members • Free

AI Automation Society Plus

3.3k members • $99/month

87 contributions to AI Bits and Pieces
🗳️ First Live Session Survey!
We’re planning upcoming live sessions and want to make sure they’re focused on what you want most.
- What topic would you like us to cover?
- Drop real use cases — we may build around them.
👉 We’ll run sessions on the top 2 choices first.
Poll
13 members have voted
4 likes • 20h
@Michael Wacht Nano 🍌s! I think it's a really interesting platform and it has massive potential.
How much do students use AI at Yale for school work?
I read an article titled “How much do students at Yale actually use AI for school work?” There was a study done using Fizz, an anonymous polling app. The question was presented, thousands of students responded, and the results were remarkable to anyone with preconceived ideas about the way Yale students learn: 75% reported using ChatGPT, more than a third admitted using it to write essays, and 25% reported using it to complete half their academic work.

Later they polled an additional 400 students about whether they knew about Yale’s official artificial intelligence guidelines, which can be found on the website of the university’s Poorvu Center for Teaching and Learning. Eighty-eight percent were unaware of them.

In the process of researching the story, they found more Fizz polls other students had conducted. In one of them, more than 3,000 students, or nearly half the undergraduates at Yale, responded; 84% reported using ChatGPT, an even higher percentage than the earlier polls suggested.

I use ChatGPT to edit my stories for my book, so I found this article interesting and wanted to share it.
0 likes • 20h
If that's the case, I should have an honorary doctorate degree by now from Yale for my use of ChatGPT and Claude. That must be between $300,000 and $400,000 minimum that I just saved. Please refer to me as Dr. Sutherland now. 🤣
🌀AI Quirks — Why AI Sometimes Ignores Your First Instruction
✨ The AI Quirk: You give AI a clear instruction at the start of a prompt… but the response seems to ignore it completely. Even stranger, if you repeat the instruction later in the prompt, suddenly the AI follows it perfectly.

✨ What’s Going On:
- Large language models weigh instructions based on proximity and clarity within the prompt.
- Instructions buried early in a long message can lose influence once the model begins predicting the response.
- The model often prioritizes the most recent instruction signals it sees.
- If a prompt contains mixed signals (examples, context, and instructions together), the model may treat the first instruction as background instead of a rule.

Example: You start with:
1) Write this in bullet points.
2) Then provide a long paragraph of context.
The model may treat the context as the main task and default to paragraphs. But if you end the prompt with “Use bullet points for the final answer”, the output suddenly follows the rule.

✨ What To Do If You See It:
- Place critical instructions at the end of the prompt.
- Separate instructions from context using spacing or labels.
- Repeat important constraints when precision matters.
Try this prompt: “Using the context above, produce the final answer in bullet points only.”

✨ Why This Happens: AI isn’t reading instructions like a human would. It’s predicting the next most likely text, and it tends to pay the most attention to the instructions it sees last.

✨ AI Bits & Pieces - helping people and businesses adopt AI with confidence.
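The "critical instructions last" advice above can be sketched as a tiny prompt-assembly helper. This is a minimal illustration in Python; the `build_prompt` function and the CONTEXT/INSTRUCTION labels are made up for the example, not part of any model's API:

```python
def build_prompt(context: str, instruction: str) -> str:
    """Assemble a prompt with the critical instruction placed last,
    separated from the context by an explicit label so it is not
    absorbed into the background material."""
    return (
        "CONTEXT:\n"
        f"{context.strip()}\n\n"
        "INSTRUCTION (apply to the final answer):\n"
        f"{instruction.strip()}"
    )

prompt = build_prompt(
    context="Q3 revenue rose 12% while support tickets fell 8%. "
            "Churn was flat quarter over quarter.",
    instruction="Use bullet points for the final answer.",
)
print(prompt)
```

Whatever model you send this to, the point is the ordering: context first, the formatting rule last and clearly labeled, so the instruction is the freshest signal the model sees before it starts generating.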
2 likes • 5d
@Akihiko Asada Welcome!
2 likes • 5d
@Michael Wacht I use it in my dev/client work, with specific labeling instructions for easy identification: INTERNAL, RESEARCH, CONTEXT, PUBLISHED, etc. Athena keeps them in perfect order.
Claude Code just added a /color command.
You can now set your prompt bar to red, blue, green, yellow, purple, orange, pink, or cyan. I run multiple Claude Code sessions at the same time. Different projects, different repos, different contexts. Before this, they all looked identical. Tab back to the wrong terminal and you're talking to the wrong session about the wrong codebase. Now I color-code them. Red for the client build. Blue for internal ops. Green for experiments I might throw away. One glance and I know exactly which session I'm in. Single terminal? Skip it. Multiple sessions? Try it. /color red
0 likes • 6d
Wow I am such a Nerd 🖥️, I accept that. 🤣
0 likes • 5d
@Dena Dion Ha, that's a great example of how natural voice control has gotten. A couple years ago you'd have needed a specific app, a hub, and probably 30 minutes of setup just to change a light color. Now you just talk to it and it works.
🎥 Out of the Box in 30: Sora 2 ReDux (Let’s Have Some Fun)
Welcome to the Out of the Box series — where I explore what can be built with no-code and low-code AI tools in 30 minutes or less. No manuals. No tutorials. Just curiosity and creation in motion. This time I revisited Sora 2 a few months later to see how the experience has evolved.

App: Sora by OpenAI
Time: Under 30 Minutes
Category: AI Video Creation / Prompt-Directed Video
Video Title: Move Over Rover, The Dog Days of Coding Are Over - Claude Code is The Cat's Meow

🎥 What Is Sora?
Sora is an AI video generation platform that transforms a simple text prompt into lifelike, cinematic scenes, complete with motion, lighting, and visual storytelling. Think of it as having a director, camera crew, and editor… all powered by a prompt.

⚙️ Experience 1: The First Test
A few months ago, I ran an Out of the Box experiment with Sora using a simple presenter-style scene. The results were impressive for early generative video, but the workflow still felt a bit like experimentation. The outputs were interesting, but not something that added much practical value beyond demonstrating what the technology could do.

If you’re curious about that original test, you can see the full post here:
👉 https://www.skool.com/ai-bits-and-pieces/out-of-the-box-in-30-sora-2?p=e63f6633

That first experiment helped show what was possible, but the bigger question was how quickly the experience would evolve.

⚙️ Experience 2: Revisiting It Today
For the second experiment, I tried something completely different: a playful, high-motion scene designed to test character behavior and storytelling.

Prompt theme: A cat driving a quad runner at high speed, Fast & Furious style, with a dog riding on the back howling and clearly terrified.

The twist:
- The cat is labeled “Claude Code.”
- The dog is labeled “ChatGPT.”

Experiment 2 Video: https://sora.chatgpt.com/p/s_69b4d4703dbc819180c914a61747c81f?psh=HXVzZXItQWI5dFRpa3JRS1RTSmhwbDY3VlFYaWxv.4nGp4ZY9Gsxo
2 likes • 8d
@Michael Wacht The fact that Claude Code is the cat calmly driving while ChatGPT is the dog howling on the back might be the most accurate AI metaphor of 2026. Great breakdown as always, Michael. The iterative workflow improvements are where the real story is.
2 likes • 6d
@Muskan Ahlawat 👊🏻
Matthew Sutherland
Level 5 • 64 points to level up
@matthew-sutherland-4604
AI Automation Architect @ ByteFlowAI | Host of AI for Life (Claude.ai, CoWork, Claude Code for Mac). Execution first.

Joined Dec 14, 2025
Mid-West, United States