This is a follow-up to my original "Wine Glass Test," a simple experiment that turned into something more interesting.
After my first post, I received a thoughtful suggestion from Matthew. His advice was straightforward:
Be more prescriptive.
So I refined the prompt to this:
"Create a glass of wine that is full, red wine. It needs to be at the brim, so not to run over, and not below the brim to show any space between the brim and the surface of the wine in the glass."
The image below is the direct result.
And the result is telling.
What This Actually Proves
This wasnât about aesthetics.
It was about bias and instruction.
When I originally asked for a "full glass of wine," the model produced what most restaurants would call full, but it still left space at the top.
That's not an error.
That's statistical bias.
The model leaned into the most common interpretation of "full."
When the instruction became extreme and structured, the behavior changed. It complied precisely.
There are two observations I take from this test:
1️⃣ Prompting Is a Skill
We often talk about model bias as if it's a flaw.
It's not.
It's probability doing what probability does.
My first prompt allowed the model to default to a "standard pour."
The refined prompt removed ambiguity.
By defining the boundary conditions (no gap, no overflow), the model had to break from its average tendency and execute exactly.
That's not luck.
That's instruction design.
Prompting isn't just writing a sentence.
It's mapping expectation into structure.
And as Matthew pointed out, that skill develops iteratively.
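Mapping expectation into structure can even be sketched mechanically. The snippet below is a purely illustrative sketch (the `add_constraints` helper and the prompt strings are my own, not part of any real tool): it shows how a vague request becomes precise once the boundary conditions are stated explicitly.

```python
# Illustrative sketch of "instruction design": turning a vague request
# into one with explicit boundary conditions. The helper is hypothetical.

def add_constraints(request: str, constraints: list[str]) -> str:
    """Append explicit boundary conditions to an otherwise ambiguous prompt."""
    return request + " " + " ".join(constraints)

vague = "Create a full glass of red wine."

precise = add_constraints(
    "Create a glass of wine that is full, red wine.",
    [
        "The wine must reach the brim exactly.",  # rules out a gap below the brim
        "It must not run over the brim.",         # rules out overflow
    ],
)

print(precise)
```

The vague version leaves "full" open to the model's statistical default; the constrained version closes off both failure modes by naming them.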
2️⃣ Natural Language Still Has Friction
The deeper takeaway isn't that the model can create a perfectly full glass.
It's that everyday language is still ambiguous to it.
When a human says "full glass of wine," we infer intent through context.
The model infers through probability.
Those are not the same.
For AI to feel seamless in daily life, we shouldn't need to mathematically define "full."
But today, precise output still requires precise input.
The Real Distinction
As a business tool, Nano Banana 2 performs extremely well when given structured, constraint-based instructions.
But as an everyday assistant, it still exposes the gap between statistical understanding and semantic understanding.
That gap is where the next layer of AI progress will happen.
The interesting part of this test isn't the wine.
It's the friction between language and logic.
And every time we refine a prompt, we're not just training the model.
We're training ourselves.