
Owned by Eduard

Spielplatz für SkooL

6 members • Free

Everyone is talking about "SKOOL". We want to learn together HOW Skool works, share what we've learned, and grow together 🌴🌶️🪴 ➕ FREE bot detection

Memberships

KI Automatisierung Masterclass

70 members • $77/month

n8n KI Business Agenten

245 members • $97/month

AI Automation Society Plus

3.3k members • $99/month

🇩🇪 Skool IRL: Berlin

171 members • Free

20 contributions to AI Bits and Pieces
🌀AI Quirks — Why AI Sometimes Ignores Your First Instruction
✨ The AI Quirk: You give AI a clear instruction at the start of a prompt… but the response seems to ignore it completely. Even stranger, if you repeat the instruction later in the prompt, suddenly the AI follows it perfectly.

✨ What's Going On:
- Large language models weigh instructions based on proximity and clarity within the prompt.
- Instructions buried early in a long message can lose influence once the model begins predicting the response.
- The model often prioritizes the most recent instruction signals it sees.
- If a prompt contains mixed signals (examples, context, and instructions together), the model may treat the first instruction as background instead of a rule.

Example: You start with: 1) Write this in bullet points. 2) Then provide a long paragraph of context. The model may treat the context as the main task and default to paragraphs. But if you end the prompt with "Use bullet points for the final answer", the output suddenly follows the rule.

✨ What To Do If You See It:
- Place critical instructions at the end of the prompt.
- Separate instructions from context using spacing or labels.
- Repeat important constraints when precision matters.

Try this prompt: "Using the context above, produce the final answer in bullet points only."

✨ Why This Happens: AI isn't reading instructions like a human would. It's predicting the next most likely text — and AI tends to pay the most attention to the instructions it sees last.

✨ AI Bits & Pieces — helping people and businesses adopt AI with confidence.
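The placement and labeling tips above can be sketched as a tiny prompt builder. This is a hypothetical helper, not part of any particular SDK: it simply puts the context first under a label and keeps the critical formatting instruction at the very end, where the post suggests models weigh it most.

```python
def build_prompt(context: str, instruction: str) -> str:
    """Assemble a prompt with the critical instruction placed last.

    Labels separate context from the rule, and the rule sits at the
    end of the prompt rather than buried at the start.
    """
    return (
        "CONTEXT:\n"
        f"{context.strip()}\n\n"
        "INSTRUCTION:\n"
        f"{instruction.strip()}"
    )

prompt = build_prompt(
    context="Q3 revenue grew 12% while support tickets dropped 8%.",
    instruction="Use bullet points for the final answer.",
)
print(prompt)
```

However you send it to a model, the same idea applies: separate context from instructions with explicit labels, and end on the constraint you care about most.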
🌀AI Quirks — Why AI Sometimes Ignores Your First Instruction
1 like • 16h
Thank you
🎯 Naming Your AI Agency Part 4 of 5: Future-Proof Naming
AI doesn’t sit still. So how do you design for a moving target? Terminology changes. Tools rotate. Models leapfrog each other. What sounds cutting-edge today can feel narrow tomorrow. We’ve already watched the cycle: “ChatGPT Consultant.” “Prompt Engineering Agency.” “GPT Automation Studio.” Each made sense at the time. Each anchored to a moment. That’s the risk.

🎯 Tool Anchoring Is a Short Shelf-Life Strategy
When you anchor your company name to:
- A specific LLM
- A specific interface
- A specific tactic
- A specific trend
You’re betting that layer remains dominant. In AI, that’s rarely a safe bet. The model landscape shifts. Capabilities expand. Language evolves. Your name shouldn’t expire with the cycle.

🎯 Design for Expansion, Not Just Accuracy
Future-proof names are designed for range. They allow you to move from:
Prompting → Automation
Automation → Agents
Agents → Orchestration
Orchestration → Strategy
Without renaming the company. They also age better in enterprise settings. A CFO is less interested in today’s tool. They’re interested in operational durability. Your name should signal that.

🎯 The Five-Year Test
Before locking in a name, ask:
- Will this still make sense in five years?
- Will this still sound credible in a boardroom?
- Will this limit the services I can offer?
If the answer creates hesitation, reconsider. Because rebranding later isn’t cosmetic. It resets key legal and marketing assets that have real implications:
- Trademark filings and legal protections
- Domain authority and SEO history
- Backlinks and search equity
- Brand recognition in the market
- Client references and case studies
- Contracts, agreements, and documentation
- Marketing collateral and positioning assets
It’s not just a new logo. It’s administrative work. It’s marketing disruption. It’s strategic distraction. That’s why future-proofing isn’t about sounding timeless. It’s about reducing unnecessary friction five years from now.

🎯 The Strategic Principle
1 like • 20d
"Skool Builder" as a profession will stay?!
🍷 Follow Up: Nano Banana 2 - Wine Glass Test
This is a follow-up to my original “Wine Glass Test” — a simple experiment that turned into something more interesting. After my first post, I received a thoughtful suggestion from @Matthew Sutherland. His advice was straightforward: Be more prescriptive.

So I refined the prompt to this: “Create a glass of wine that is full, red wine. It needs to be at the brim, so not to run over, and not below the brim to show any space between the brim and the surface of the wine in the glass.”

The image below is the direct result. And the result is telling.

🍷 What This Actually Proves
This wasn’t about aesthetics. It was about bias and instruction. When I originally asked for a “full glass of wine,” the model produced what most restaurants would call full — but still left space at the top. That’s not an error. That’s statistical bias. The model leaned into the most common interpretation of “full.” When the instruction became extreme and structured, the behavior changed. It complied precisely.

🍷 Two observations come out of this test:

1️⃣ Prompting Is a Skill
We often talk about model bias as if it’s a flaw. It’s not. It’s probability doing what probability does. My first prompt allowed the model to default to “standard pour.” The refined prompt removed ambiguity. By defining the boundary conditions — no gap, no overflow — the model had to break from its average tendency and execute exactly. That’s not luck. That’s instruction design. Prompting isn’t just writing a sentence. It’s mapping expectation into structure. And as Matthew pointed out, that skill develops iteratively.

2️⃣ Natural Language Still Has Friction
The deeper takeaway isn’t that the model can create a perfectly full glass. It’s that everyday language is still ambiguous to it. When a human says “full glass of wine,” we infer intent through context. The model infers through probability. Those are not the same. For AI to feel seamless in daily life, we shouldn’t need to mathematically define “full.”
🍷 Follow Up: Nano Banana 2 - Wine Glass Test
2 likes • 21d
I like nano banana 2
📦 Out of The Box in 30: Nano Banana — From Dirt Lot to Formula 1 Celebration 🏎️🍾
This one started with a Christmas gift. Our wives bought us a once-in-a-lifetime bucket-list experience — a supercar track day with Xtreme Xperience. Real track. Real Ferraris. Real adrenaline. We took a simple selfie in the dirt parking lot. And then I opened Gemini. What happened next is a masterclass in iterative AI.

🖼️ Image A — The Original
Three guys. Track credentials. Ferraris behind us. Dirt lot staging area. Pure, unfiltered reality.

🖼️ Image B — The First Prompt
Prompt: “Turn this into a high-energy racing celebration.”
Result:
- Racing suits added
- Champagne spray
- Victory emotion amplified
But… we were still standing in the dirt lot. The photographer was still in frame.
🔎 Lesson: AI enhances theme before it reconstructs environment.

🖼️ Image C — The Refinement
Prompt: “Refine image to remove person in front taking selfie.”
Result:
- Photographer removed
- Composition tightened
- Celebration preserved
Still in the dirt lot.
🔎 Lesson: AI fixes exactly what you direct — nothing more.

🖼️ Image D — The Elevation
Prompt: “Excellent. Show our faces and put us on a platform with a crowd.”
Now we crossed a threshold.
Result:
- Podium platform created
- Stadium grandstands built
- Crowd density added
- Confetti layered in
- Facial continuity preserved
- Champagne motion maintained
We went from parking lot… to Formula 1-style celebration.

🍌 Why “Nano Banana”?
Because this wasn’t a giant production pipeline. No Photoshop. No masking tools. No complex workflow. Just iterative prompting. Small adjustments. Layered direction. Escalating scene construction. Fast. Focused. Conversational.

🧠 The Real Lesson
This wasn’t: Prompt → Perfect Output. AI didn’t just generate. It collaborated. And the difference between a dirt lot and a podium? Three prompts and clear intent.

🏁 The Business Parallel
This is how AI will be used inside organizations: Draft → Refine → Expand → Reframe → Elevate. The magic isn’t the first output.
📦 Out of The Box in 30: Nano Banana — From Dirt Lot to Formula 1 Celebration 🏎️🍾
2 likes • 26d
🏆 Very nice 🎖️
I am about to start my own business and I need advice
And as it is in the beginning, you cannot afford employees. So you do everything yourself. But we are lucky. Today we have AI. And AI can really help if we use it the right way.

Right now I am thinking about building a small “virtual team” with Claude AI and Cowork. Maybe a CEO assistant to help me structure decisions. A strategist for positioning and planning. Someone for marketing ideas and content. There are so many possibilities. Maybe you can give me some more hints.

I do not want to reinvent the wheel. I am sure there are already good skills, prompts, or setups out there that I can use. My question to you: Where do you find good and useful resources? GitHub? Specific websites? Or is there something already inside this community?

I would really appreciate your tips. In the beginning, this can make a big difference. Thank you 🙌
I am about to start my own business and I need advice
0 likes • 28d
As an agile coach, I can approve BMAD as truly agile :)
0 likes • 28d
sorry for the wall of text ... BMAD got me fired up
Eduard Friesen
Level 3 • 6 points to level up
Automation mitKI.ai

Active 24m ago
Joined Oct 4, 2025
Bruchsal