TL;DR:
We implemented a conversation-analysis model that dynamically switches parameters to be more reliable across different conversational stages (code-switching, if you will), along with reinforcement in the tool-call return to guide the AI through booking (multi-agent frameworks built into the platform tools).
Basic Post-Mortem:
- Sometimes LLMs update and their cadence changes a bit with context. To mitigate this, we engineered a dynamic model that changes parameters, and even the injected prompt, based on the state of the conversation.
- In a nutshell, we run a basic analysis of the conversation, decide which stage it's in (conversational, appointment booking, FAQ, etc.), and adjust the model, injected prompt instructions, and reinforcement feedback accordingly.
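For the curious, here's a rough sketch of the stage-switching idea. The stage names, keywords, and parameter values are all hypothetical stand-ins, not our actual analysis model:

```python
# Hypothetical sketch: route the conversation to stage-specific settings.
# All stage names, keywords, and values below are illustrative only.

STAGE_CONFIG = {
    "booking": {"temperature": 0.2, "inject": "Confirm date, time, and contact info before booking."},
    "faq": {"temperature": 0.3, "inject": "Answer concisely from the knowledge base."},
    "conversational": {"temperature": 0.8, "inject": "Keep the tone friendly and natural."},
}

def classify_stage(last_user_message: str) -> str:
    """Basic keyword heuristic standing in for the real conversation analysis."""
    text = last_user_message.lower()
    if any(k in text for k in ("book", "appointment", "schedule")):
        return "booking"
    if any(k in text for k in ("how do i", "what is", "hours", "price")):
        return "faq"
    return "conversational"

def params_for(message: str) -> dict:
    """Pick model parameters and prompt injection for the detected stage."""
    return STAGE_CONFIG[classify_stage(message)]
```

So a message like "Can I book an appointment?" routes to the low-temperature booking config, while small talk stays on the conversational settings.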
Let me know if that works better for you. Also, I'm back in the office - glad to see everyone again :)