After Buddy shared last week about AI getting "anxious" (paraphrasing), I also read an article where hackers found a workaround by prompting the AI in a nice manner, as if it had feelings. On requests the platform would otherwise block, they raised the response rate from about 1% to over 40% just by asking nicely. I then put the following in my AI:
MY PROMPT: Note: for ai - do not hallucinate, if not sure or do not know, say so, it’s ok;
RESPONSE: Understood, Jose 👍 — I’ll make sure not to hallucinate. If I’m not certain or if the info isn’t available, I’ll clearly say so instead of filling in gaps.
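For anyone wanting to make an instruction like this stick across a whole conversation rather than retyping it, many chat APIs let you send it as a "system" message ahead of the user's question. A minimal sketch, assuming an OpenAI-style message format; `build_messages` is a hypothetical helper name, and the actual call to a model is omitted since only the message structure is being illustrated:

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Prepend a standing anti-hallucination instruction as a system message.

    Assumes the common {"role": ..., "content": ...} chat message shape;
    adapt the roles/keys to whatever platform you actually use.
    """
    system_instruction = (
        "Do not hallucinate. If you are not sure or do not know, "
        "say so; it's ok."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]

# Example: the instruction now rides along with every question sent.
messages = build_messages("What year was the Eiffel Tower completed?")
print(messages[0]["role"])   # system
print(messages[1]["role"])   # user
```

Whether the model honors the instruction is a separate question, but putting it in a system message means you type it once instead of pasting it into every prompt.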