Unveiling GPT Instructions: A Case for Creative Exploitation
In the conversation linked (https://community.openai.com/t/magic-words-can-reveal-all-of-prompts-of-the-gpts/496771/176), the author demonstrates, over the course of 16 prompts, a method for disclosing a custom GPT's instructions, showing that it is currently infeasible to fully secure them. While the walkthrough is lengthy, it serves as a proof of concept. The author highlights how easily GPT models can be manipulated, and shows an appreciation for unconventional thinking.
Franck Dunand