Irrelevant retriever results and LLM responses while performing RAG with LangChain
I am very new to LangChain. I watched a YouTube tutorial (https://www.youtube.com/watch?v=yF9kGESAi3M&t=7261s) and am mimicking its code. While practicing RAG, the first error came from TextLoader("./..//odyssey.txt"): even though odyssey.txt was present in the directory, the loader reported that the file could not be loaded. I solved that by passing an explicit encoding: TextLoader("./..//odyssey.txt", encoding='utf-8').

However, when I use the same query as in the video, "Who is Odysseus' wife?", the retriever returns very odd results; I can find no relevance in the retrieved chunks (the image shows what the retriever returned). Furthermore, in the conversational RAG case, the LLM always responds that the answer cannot be found.

I am using Google's embeddings, GoogleGenerativeAIEmbeddings(model="models/embedding-001"), and I suspect something is wrong with the vector generation. However, I have also tried a Hugging Face embedding model, and the issue remains. Your advice would be helpful. Thanks for your time.
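For reference, here is a minimal stand-alone reproduction of the loader error, without LangChain. My assumption (the file name odyssey_sample.txt and the sample text are made up for this demo) is that the original failure was a decoding problem rather than a missing file: odyssey.txt is UTF-8, but open() without an explicit encoding falls back to the platform default codec (e.g. cp1252 on Windows), and TextLoader surfaces the resulting decode failure as a loading error.

```python
# Minimal reproduction of the TextLoader failure, without LangChain.
# Assumption: the real odyssey.txt is UTF-8 and contains non-ASCII
# characters; on Windows, open() without encoding= uses a legacy
# single-byte codec such as cp1252, which cannot decode those bytes.
path = "odyssey_sample.txt"  # hypothetical stand-in for odyssey.txt
text = "Who is Odysseus' wife? \u1f48\u03b4\u03c5\u03c3\u03c3\u03b5\u03cd\u03c2"

with open(path, "w", encoding="utf-8") as f:
    f.write(text)

# With the explicit encoding, the file reads back intact:
with open(path, encoding="utf-8") as f:
    assert f.read() == text

# With a legacy single-byte codec, the UTF-8 bytes cannot be decoded:
try:
    with open(path, encoding="cp1252") as f:
        f.read()
    outcome = "decoded"
except UnicodeDecodeError:
    outcome = "decode failed"
print(outcome)  # → decode failed
```

This is why adding encoding='utf-8' fixed the loading step; it should not, by itself, affect embedding or retrieval quality.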