Drift? Fuzzy Memory? Inputs Weighted Improperly? Help... ; )
I had this very same experience with ChatGPT. The more I tried to refine GPT's output to capture a slight nuance, the further it drifted from the core theme. It over-applied what was meant to be fine-tuning and altered the message by placing too much weight on my most recent input. So what was meant to be nuance adjustment and "dialing it in" became an overwrite. Quite maddening, to say the least. Somebody help us, please!

Is this because I don't have "memories" turned on (as Igor advised)? Are my prompts wrong? Is this the 90/10 part AI Surfer mentions, and is it simply better to fine-tune in a separate document myself rather than dialing in my thought partner? I hoped to avoid doing this over and over; my rationale was that if I invest the time now, it will learn my nuances and develop a system-like cadence so I don't have to repeat the work next time. At any rate, it consumed hours and hours of running down rabbit holes... I'd like to solve it for me AND prevent it from happening to others. Hopefully some of you are further down the path and willing to help.

PS: I originally replied to another member's comment with this narrative, but thought it was worth creating a new post to surface it to the group: it seems at least a couple of us are having similar challenges. Thanks in advance!