Unveiling GPT Instructions: A Case for Creative Exploitation
In the conversation linked (https://community.openai.com/t/magic-words-can-reveal-all-of-prompts-of-the-gpts/496771/176), the individual showcases through 16 prompts a method to disclose instructions, demonstrating that it's currently unfeasible to fully secure GPT instructions. While the explanation may be extensive, it serves as proof of concept. The person highlights the ease of manipulating GPT models, showcasing an appreciation for unconventional thinking.