Matthieu-Tinycoaching opened 1 year ago
There does not seem to be a conversation template for this model in conversation.py. Just curious: which template is loaded by default? Can you specify which prompts you used? In my past experience with Vicuna, enclosing the retrieved information between a start tag "Helpful Information" and an end tag "End of Helpful Information" helps the model reduce hallucinations. e.g. here. Fine-tuning helps too.
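The tag-wrapping idea above can be sketched as a small prompt builder. The tag names come from the comment; the surrounding instruction text and the function itself are illustrative assumptions, not FastChat's actual template:

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Wrap retrieved passages in explicit start/end tags before the question.

    The "Helpful Information" tags follow the suggestion above; the rest of
    the layout is a guess and should be adapted to whatever conversation
    template the model was fine-tuned with.
    """
    context = "\n\n".join(passages)
    return (
        "Helpful Information\n"
        f"{context}\n"
        "End of Helpful Information\n\n"
        "Answer the question using only the information above. "
        'If the answer is not there, say "I don\'t know".\n\n'
        f"Question: {question}\nAnswer:"
    )
```

The explicit end tag gives the model an unambiguous boundary between retrieved context and the question, which is the point of the suggestion.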
Also, "I don't know" might be easier to implement on the vector-search side: if the cosine similarity of every matched document is below a threshold, the output can simply be "I don't know". That's just my thought.
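A minimal sketch of that retrieval-side gate, assuming you already have embedding vectors for the query and the candidate documents (the function names and the 0.75 threshold are placeholders, not from any particular library):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def retrieve_or_refuse(query_vec, doc_vecs, threshold=0.75):
    """Return indices of documents above the similarity threshold.

    Returns None when nothing clears the threshold, which the caller can
    turn into an "I don't know" answer instead of prompting the model.
    """
    hits = [
        i for i, v in enumerate(doc_vecs)
        if cosine_similarity(query_vec, v) >= threshold
    ]
    return hits or None
```

Pushing the refusal into retrieval keeps the model from having to decide when it is guessing, which is exactly the failure mode that causes hallucinations.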
Hi,
Wanting to use the fastchat-t5-3b-v1.0 model for RAG, or Retrieval-Augmented Generation (Q&A based on retrieved documents or context), I tried many prompts based on the FastChat code but never managed to obtain a good answer meeting these criteria. Does anyone have advice on using fastchat-t5-3b-v1.0 for RAG?