h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
http://h2o.ai
Apache License 2.0

lmsys/fastchat-t5-3b-v1.0 #650

Closed hemantkumar0506 closed 1 year ago

hemantkumar0506 commented 1 year ago

When I use lmsys/fastchat-t5-3b-v1.0 for inference over documents, it takes a very long time to generate a response and then returns the same answer for every query. I'd like to know why this is happening and how I can resolve it.

pseudotensor commented 1 year ago

Most likely the prompt_type is wrong for that model. See https://github.com/h2oai/h2ogpt/blob/main/docs/FAQ.md#adding-models
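To illustrate why a wrong prompt_type produces slow or degenerate output: each instruction-tuned model expects its queries wrapped in the template it was fine-tuned on, and h2ogpt's prompt_type selects that template. The sketch below is not h2ogpt's actual code, and the template strings are illustrative assumptions, but it shows the mechanism: the same query is rendered very differently depending on the selected type, and a mismatch means the model sees text in a format it was never trained on.

```python
# Illustrative sketch of prompt templating (not h2ogpt's real implementation).
# Template strings here are assumptions for demonstration only; consult the
# linked FAQ for the prompt_type that actually matches your model.
PROMPT_TEMPLATES = {
    # Raw pass-through: fine for base models, wrong for chat-tuned ones.
    "plain": "{query}",
    # Vicuna-style turn format, roughly what fastchat models expect.
    "instruct_vicuna": (
        "A chat between a curious user and an AI assistant. "
        "USER: {query} ASSISTANT:"
    ),
}

def build_prompt(query: str, prompt_type: str) -> str:
    """Wrap the user query in the template the model was trained on."""
    try:
        template = PROMPT_TEMPLATES[prompt_type]
    except KeyError:
        raise ValueError(f"Unknown prompt_type: {prompt_type!r}")
    return template.format(query=query)

# A chat-tuned model fed the "plain" form may loop or repeat one answer,
# because it never sees the turn markers that signal "now respond".
print(build_prompt("What does the document say?", "instruct_vicuna"))
```

In h2ogpt itself the equivalent fix is passing the correct `--prompt_type` when launching with the chosen `--base_model`, as described in the FAQ linked above.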