
Owned by Sean

For everyday people using AI (LLMs) for real work and life tasks: writing your CV, thinking, organising. No hype, no agents. Just useful outcomes.

Memberships

Write, Publish, Influence

184 members • Free

UK Skoolers

369 members • Free

Synthesizer: Free Skool Growth

42k members • Free

30-Day Skool Hackathon

563 members • Free

AI Without The Hype

64 members • Free

Skoolers

188.9k members • Free

2 contributions to AI Without The Hype
The 4 parts of a good Copilot prompt
Happy 50+ members!!! 🎉🎉🎉 Huge thank you to all of you early founding members! It’s been great seeing you join, explore, comment and share 🙏

I know quite a few of you are using Copilot, so here’s a quick bonus tip when you’re writing prompts:

💡 Always include:
1. A GOAL: What are you trying to get out of it?
2. The CONTEXT: What are you working on / why does it matter?
3. The SOURCES: Where should it look? (e.g. emails, Teams chats, documents, websites)
4. Your EXPECTATIONS: How should it respond?

➡️ Quick Example:
Goal: generate 3–5 bullet points
Context: preparing for a meeting with Manager X to update them on work progress
Sources: focus on emails and Teams chats since June (if you want me to show you how to access your emails/chats, comment on this post!)
Expectations: use simple language so I can get up to speed quickly

It sounds simple, but it makes a big difference.

👇 Let me know if you want more of these! And if you know someone who would find these tips useful, 🗣️ invite them over!
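To make the four-part structure concrete, here is a minimal sketch in Python (purely illustrative; the function and field names are mine, not anything Copilot requires — Copilot only ever sees the final text):

```python
# Illustrative only: a tiny helper that assembles the four parts of a
# prompt into one message. The field names are hypothetical; the chatbot
# receives the combined text, not a structured object.

def build_prompt(goal: str, context: str, sources: str, expectations: str) -> str:
    """Combine the four parts into a single prompt string."""
    return (
        f"Goal: {goal}\n"
        f"Context: {context}\n"
        f"Sources: {sources}\n"
        f"Expectations: {expectations}"
    )

prompt = build_prompt(
    goal="generate 3-5 bullet points",
    context="preparing for a meeting with Manager X to update them on work progress",
    sources="focus on emails and Teams chats since June",
    expectations="use simple language so I can get up to speed quickly",
)
print(prompt)
```

Whether you use a helper or just type the four lines by hand, the point is the same: the labels force you to supply the goal, context, sources and expectations every time.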
1 like • 27d
@Rachel Dbeis that’s an interesting take, and it aligns with a lot of what I’ve been exploring too. The instruction vs role-based distinction is intriguing. My experience is that instruction prompts are naturally more constrained, giving the model less room to behave differently, whereas role-based prompts lean heavily on the model’s priors, and that’s where I’ve started to see real variation in how structure lands. The Copilot point is fascinating to me. I’ve noticed the same thing, but I’ve been wondering whether that’s a Copilot behaviour or whether it traces back to the underlying model and how it’s been tuned. My hunch is it’s the latter, but I’m not certain, and I’d love to know if you or others have tested that deliberately. Something I’ve been exploring more is context layering: by that I mean not just how you structure a prompt but where information sits within it. Models don’t seem to attend to context uniformly, and I’ve noticed ordering and prioritisation can produce quite different outputs across Claude, GPT and Gemini. Still a work in progress, though. Have you experimented with that side of things at all?
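As one illustration of what "context layering" could mean in practice, here is a small sketch (Python, illustrative only; the layout and section labels are assumptions, not a tested recipe) that puts the key instruction at both the start and the end of a long prompt, on the reasoning that models often attend more strongly to the beginning and end of their context than to the middle:

```python
# Illustrative sketch of "context layering": the key instruction appears
# first and last, with bulky reference material in the middle, where
# models are said to attend less reliably. This layout is an assumption
# about what works, not a proven technique.

def layered_prompt(instruction: str, background_docs: list[str]) -> str:
    middle = "\n\n".join(background_docs)  # reference material in the middle
    return (
        f"TASK: {instruction}\n\n"
        f"REFERENCE MATERIAL:\n{middle}\n\n"
        f"REMINDER OF TASK: {instruction}"
    )

print(layered_prompt(
    instruction="Summarise the key risks in 3 bullet points.",
    background_docs=["Doc 1 text...", "Doc 2 text...", "Doc 3 text..."],
))
```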
1 like • 25d
@Rachel Dbeis useful insight in the article, which I’ve researched a bit further. It seems the models have moved on since then, but they may still have some issues with large context windows (long chats) and with retrieving content from documents attached to prompts. Whilst it’s a ‘watch out’, it’s no more so than any other hallucination.
Your Chatbot Isn’t Your Therapist
Interesting article summarising the growing concern about how people are using AI for emotional support and as an alternative to therapy. For some people it makes sense. It’s almost safer to talk to AI; it’s cheaper, and it’s always there, always responding in a calm, reassuring way.

But much like what happens on social media, AI can often create an echo chamber. Instead of helping people move through difficult spots, it can reinforce them. You ask the same question, you get a slightly reworded, comforting answer, and you feel better for a while. Then the doubt comes back, and you ask again. Over time, you just continue to rehearse the anxiety.

Clinically, this is called mirroring beliefs (validating them when sometimes they should be challenged). So if someone has distorted or unhelpful thoughts, the AI can make those thoughts more potent and validate them rather than question them. And of course, AI doesn’t challenge, push back or get frustrated. It’s not gonna say: “You’ve asked me this five times… something deeper is going on… let’s talk about it.” It’s that tension, uncomfortable as it sometimes is, that drives people to growth and progress through self-reflection and seeking proper help.

Alarmingly, the more time people spend with chatbots, the more likely they are to become emotionally dependent, isolated, and caught in repetitive thinking loops. This is why it’s really important to be conscious of how you use AI. If it’s helping you see something new, great. Just be aware if it’s causing you to avoid facing things.

Some clinicians are even encouraging people to build in “speed bumps”… for example, instructing the chatbot not to give reassurance on certain worries, but instead to gently push them to sit with the discomfort.

As with many tools, AI can either help you think more clearly or help you stay stuck more comfortably.

‼️ Do you think AI should challenge users more, even if it makes the experience less “pleasant”?
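To show what such a “speed bump” might look like in practice, here is a rough sketch (Python, illustrative only; the wording is a hypothetical example, not taken from the article or from any clinical guidance) of a custom instruction someone could give a chatbot:

```python
# Illustrative only: a hypothetical "speed bump" instruction a user could
# paste into a chatbot's custom instructions. The wording is an example
# I wrote for illustration, not clinical advice.

SPEED_BUMP_INSTRUCTION = (
    "When I ask for reassurance about a worry I have raised before, do not "
    "reassure me. Instead, point out that I have asked before, encourage me "
    "to sit with the discomfort for a few minutes, and suggest I note the "
    "worry down to discuss with a professional."
)

print(SPEED_BUMP_INSTRUCTION)
```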
2 likes • Apr 11
I saw something on LinkedIn about this the other day. I was really surprised, but I can understand why it happens given how many people anthropomorphise LLMs.
Sean McLoughlin
Level 2 • 15 points to level up
@sean-mcloughlin-7300
20+ years in Ops. Now exploring AI in the real world: no hype, just practical ways to think better, work smarter, and get more from it.

Active 4h ago
Joined Apr 5, 2026