Yesterday I managed to get the full document into the vector database. Chunking, embeddings, the table setup in Supabase, everything worked. Yessss 😁
So today I want to take the next logical step.
I want to actually talk to the vectorized data. Chat with it.
Ask questions. See what kind of answers I get and how good the quality is, as far as I can judge at this point.
This is the exciting part for me. Yesterday was all about structure and setup. Today is about seeing whether the whole pipeline actually produces meaningful results. If the retrieval works. If the embeddings make sense. If the answers stay close to the source.
I know the answers won’t be perfect. But that is exactly why I’m doing this challenge.
Now that the data is in the vector store, the next step is to build the retrieval flow in n8n: send a query, look at the matches it pulls back, and see how the LLM responds once it has that context.
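To make the idea concrete, here is a toy sketch of what that retrieval step does under the hood. This is not the actual n8n workflow or the Supabase query; the real embeddings live in pgvector and the orchestration happens in n8n. The tiny hand-made vectors, the `chunks` list, and the `retrieve` helper are all made up for illustration, just to show the shape of "embed the query, rank chunks by similarity, paste the top matches into the prompt":

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (what pgvector computes for us)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical chunk store: (text, embedding) pairs.
# In the real pipeline these rows sit in a Supabase table.
chunks = [
    ("Chunking splits the document into pieces.", [0.9, 0.1, 0.0]),
    ("Supabase stores the vectors in a table.",   [0.1, 0.9, 0.0]),
    ("n8n orchestrates the retrieval flow.",      [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding, top_k=2):
    """Rank all chunks by similarity to the query and keep the best matches."""
    ranked = sorted(chunks, key=lambda c: cosine(query_embedding, c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query about "vectors in Supabase" would embed close to [0.1, 0.9, 0.0].
context = retrieve([0.2, 0.8, 0.1])

# The matched chunks then become the context the LLM answers from.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context[0])  # → "Supabase stores the vectors in a table."
```

The interesting part for quality is exactly what this sketch glosses over: whether the real embeddings actually place related text close together, which is what today's experiments should reveal.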
Let’s see how good or bad the first results will be.
Time to experiment again.