Private chat with a local GPT over documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
When I use lmsys/fastchat-t5-3b-v1.0 for inference over documents, it takes a very long time to generate a response, and when it does respond, it gives the same answer for every query.
I just want to know why this is happening and how I can resolve it.