Rag-based LLM Chatbot - Problems
Hello all!
I am using a Mistral model to build a chatbot assistant, and I'm running into problems with its accuracy. Sometimes it doesn't respond using the retrieved context from the documents and instead answers from knowledge outside the RAG pipeline, and it stops following the prompt guidelines after about 5 queries.
It's really frustrating. I've tried changing the prompt style and even tried prompt chaining, but the quality of the responses is still very low.
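For reference, this is roughly how I'm assembling the prompt (a minimal sketch; the function and variable names here are illustrative, not my exact code, and my real pipeline uses a retriever rather than a hard-coded list):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a prompt instructing the model to answer ONLY from the context.

    Illustrative sketch: chunk formatting and instruction wording are
    assumptions, not the exact prompt from my pipeline.
    """
    # Number each chunk so the model (and I) can see what was retrieved.
    context = "\n\n".join(
        f"[doc {i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return (
        "You are a helpful assistant. Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


# Example usage with hypothetical chunks a retriever might return:
chunks = [
    "Our refund window is 30 days.",
    "Support hours are 9am-5pm CET.",
]
prompt = build_rag_prompt("What is the refund window?", chunks)
print(prompt)
```

Even with an explicit "answer only from the context" instruction like this, the model sometimes ignores the chunks entirely.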
What's the solution for this? Or have you faced a similar problem?
Rishita Umasankaran
Data Alchemy
skool.com/data-alchemy