Hi, kinda off topic for Ask Spot, but I figured this was the place to ask. I've noticed that the Ask Spot chatbot is great at not hallucinating: if you prompt it not to give information that doesn't exist, it won't give false information. Do any of you know how this can be done as well, or better, with LLMs such as ChatGPT? A rough sketch of the kind of prompting I mean is below.
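For context, here's roughly what I mean by "prompt it not to give information that doesn't exist" — a minimal sketch, assuming you're calling ChatGPT through the OpenAI Python client. The system prompt wording, the model name, and the test question are just illustrative choices on my part, not anything official.

```python
# Minimal sketch of an anti-hallucination system prompt via the OpenAI Python client.
# The prompt wording and model choice are examples, not a recommended recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Only answer with information you are certain exists. "
    "If you do not know, or the thing being asked about does not exist, "
    "say so plainly instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",   # any chat-capable model works here
    temperature=0,    # lower temperature tends to cut down on invented details
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Deliberately asks about a flag that doesn't exist, to see whether
        # the model admits it rather than making something up.
        {"role": "user", "content": "What does the --frobnicate flag do in curl?"},
    ],
)

print(response.choices[0].message.content)
```

In my experience this kind of instruction helps but isn't bulletproof, which is why I'm curious what else people do.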