GPT-5 and some hints to get more done
Here's the TL;DR part:
---------
A lot of people say “GPT‑5 doesn’t work” and want their 4o back. But vague feedback is like calling me from your friend’s phone saying “my phone doesn’t work”: zero info. The real skill isn’t just clever prompts, it’s shaping outputs + learning how GPT‑5 thinks.
Read the prompting guide, ask it clarifying Qs, check its reasoning, refine each run, apply a human reality‑check. Most “failures” aren’t GPT‑5—it’s how we talk to it. Context > prompts.
---------
If you wanna know more, continue here:
I get a lot of requests these days where people send me prompts and simply say:
“It doesn’t work.”
That’s like calling me from your friend’s phone and saying “my phone doesn’t work.” There’s no error description.
The only thing we know is: something, somewhere, probably isn’t working.
This, in itself, is a skill to be learned. You’ll save yourself (and others) time, money, and frustration if you get better at describing what you tried and what result you expected, rather than just saying “it failed.”
Most people focus on the input: writing clever prompts or searching for the "ultimate prompt".
But the real effort lies in how you shape and design the output. Yes, it’s tedious at times.
Even Anthropic, OpenAI, and all the others still say the same thing: experiment.
Why? Because LLMs are not, and cannot yet be, 100% failsafe.
What’s rarely done: learning GPT‑5 itself
Some overlooked steps I see almost no one trying:
  • Read the GPT‑5 prompting guide: Official Guide
  • Explore the cookbooks: OpenAI Cookbook
  • Ask GPT‑5 directly how it differs from earlier models—let it explain itself.
  • Tell it to ask clarifying questions, so it can catch ambiguous inputs (see the sketch after this list).
  • Review its reasoning traces instead of ignoring them.
  • Learn from each run, then refine and retry in a new chat.
  • Apply the human reality check: if someone with general abilities but no domain know‑how couldn’t solve your task, chances are the model can’t either.
  • Keep notes of what works and what doesn’t. This helps you actually build memory across your chats.
  • After a while, even ask GPT‑5: “Has my prompting improved?”
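To make the clarifying-questions habit concrete, here is a minimal sketch using the openai Python SDK. The system prompt wording, the model name "gpt-5", and the deliberately vague example task are my own assumptions, not anything official; adjust them to your setup.

```python
# Minimal sketch: force clarifying questions before answering.
# Assumptions: openai Python SDK installed (pip install openai),
# OPENAI_API_KEY set in the environment, and "gpt-5" available as a model.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Before answering, check the request for ambiguity or contradictions. "
    "If anything is unclear, reply ONLY with numbered clarifying questions. "
    "Otherwise, answer and briefly state the assumptions you made."
)

response = client.chat.completions.create(
    model="gpt-5",  # assumed model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # A deliberately vague request, so the model should ask back:
        {"role": "user", "content": "Make the report better."},
    ],
)

print(response.choices[0].message.content)
```

Run the same vague prompt with and without that system message and compare the two outputs; that comparison is exactly the "learn from each run, then refine" loop from the list above.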
And a note for those working with agents:
once your prompting and context skills improve, you’ll often discover you don’t need large, complex agent graphs. Sometimes a simple start node → AWS Lambda → end node is plenty.
Cleaner context replaces unnecessary plumbing.
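As a sketch of what start node → AWS Lambda → end node can look like in practice: below, the whole "graph" is a single boto3 call. The function name "summarize-ticket" and the payload shape are hypothetical, chosen only for illustration.

```python
# Minimal sketch: a "graph" that is just start -> Lambda -> end.
# Assumptions: boto3 installed, AWS credentials configured, and a
# deployed Lambda named "summarize-ticket" (hypothetical name).
import json
import boto3

lambda_client = boto3.client("lambda")

def run_pipeline(ticket_text: str) -> dict:
    # "Start node": build the event from clean, minimal context.
    event = {"text": ticket_text}

    # The only "agent" step: one synchronous Lambda invocation.
    response = lambda_client.invoke(
        FunctionName="summarize-ticket",  # hypothetical function
        InvocationType="RequestResponse",
        Payload=json.dumps(event).encode("utf-8"),
    )

    # "End node": parse and return the result.
    return json.loads(response["Payload"].read())

if __name__ == "__main__":
    print(run_pipeline("Customer cannot log in since the last deploy."))
```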
A quick paradox example
I once prompted GPT‑5 with this, on purpose:
“Show me a picture of a beautiful sunset.
The sunset should look very ugly.”
That conflict, beautiful and ugly at the same time, creates a paradox.
A human would also get stuck here, because the instruction itself makes no sense.
(Don't confuse it with a beautifully ugly sunset, which can still make sense. ;-)
That’s why vague or contradictory prompts waste credits and make it look like the model “isn’t working.” Often, the issue isn’t GPT‑5. It’s how we talk to it.
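One cheap way to catch such contradictions before they burn credits is a short pre-flight pass: ask the model to list conflicts in a prompt instead of executing it. The sketch below is one way to phrase that check, with the same assumed SDK setup and model name as before.

```python
# Minimal sketch: pre-flight contradiction check before the real run.
# Assumptions: openai Python SDK, OPENAI_API_KEY set, "gpt-5" available.
from openai import OpenAI

client = OpenAI()

def find_conflicts(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Do not execute the user's prompt. Only list any "
                    "contradictory or impossible instructions in it, "
                    "or reply 'OK' if there are none."
                ),
            },
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# The paradox prompt from above should get flagged, not rendered:
print(find_conflicts(
    "Show me a picture of a beautiful sunset. "
    "The sunset should look very ugly."
))
```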
Just my two cents.
There’s a lot more to consider, but remember: GPT‑5 can often give you the answers on how to use it—if you let it.
P.S. Below is the beautiful‑ugly sunset anyway.
That shift from clever prompts to clean context doesn’t always show when we only talk about prompts, but it makes a real difference.
And just for clarity: sometimes I get called a prompt engineer. That’s not quite right. I switched to context engineering some months back, and these days I work at the level of an AI System Architect, a role that includes both of those disciplines.