Issue with LLM responses while performing RAG
I am very new to LangChain. I watched a YouTube video (https://www.youtube.com/watch?v=yF9kGESAi3M&t=7261s) and am following along with the code. While practicing RAG, the first error came from TextLoader("./..//odyssey.txt"): even though odyssey.txt was present in the directory, it reported "file not found". I solved it by using TextLoader("./..//odyssey.txt", encoding='utf-8').
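For anyone hitting the same thing: passing encoding='utf-8' usually matters because the file *is* found, but cannot be decoded with the platform's default codec, and the failure surfaces as a confusing load error. A minimal sketch in plain Python (no LangChain; the file name demo.txt is made up for illustration) showing the underlying decode problem:

```python
# Sketch of why encoding='utf-8' fixes the load: a UTF-8 file with
# non-ASCII characters fails to decode under a legacy default codec.
# "demo.txt" is a hypothetical file, not the actual odyssey.txt.
from pathlib import Path

text = "Odysseus’ wife is Penelope."  # curly apostrophe: non-ASCII
Path("demo.txt").write_text(text, encoding="utf-8")

# Reading with the correct encoding works:
print(Path("demo.txt").read_text(encoding="utf-8"))

# Reading with a wrong single-byte codec raises UnicodeDecodeError:
try:
    Path("demo.txt").read_text(encoding="ascii")
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)
```

So the encoding argument is not cosmetic; without it the loader may be decoding with whatever the OS default is.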
But when I use the same query as in the video, "Who is Odysseus' wife?",
the retriever returns very odd results; I cannot see any relevance in them. The attached image shows what the retriever returned.
Furthermore, in the conversational RAG case, the LLM always responds that the answer cannot be found.
I am using Google's embedding model: GoogleGenerativeAIEmbeddings(model="models/embedding-001").
I suspect something is wrong with the vector generation. However, I have also tried a Hugging Face embedding model, and the issue remains.
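To make the suspicion concrete, this is roughly the cosine-similarity ranking a vector retriever computes under the hood (a minimal sketch with made-up toy vectors, not real embedding-001 output). With real vectors from embed_query/embed_documents, if the obviously relevant chunk does not rank first, the problem is in embedding or indexing rather than in the LLM prompt:

```python
# Sketch of the similarity ranking a vector-store retriever performs.
# The vectors below are invented 3-dimensional examples for illustration;
# a real embedding model returns much higher-dimensional vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these came from embed_query / embed_documents:
query_vec = [0.9, 0.1, 0.0]
chunks = {
    "Penelope, wife of Odysseus, waited twenty years.": [0.8, 0.2, 0.1],
    "The ship sailed past the sirens.": [0.1, 0.9, 0.2],
}

# Rank chunks by similarity to the query, highest first.
ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
print(ranked[0])  # → "Penelope, wife of Odysseus, waited twenty years."
```

Printing the raw retrieved chunks and their scores this way (instead of only looking at the final LLM answer) is a quick sanity check on whether the vector store was built correctly.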
Any advice would be helpful.
Thanks for your time.
Banibrata Ghosh
AI Developer Accelerator
skool.com/ai-developer-accelerator