-
I am trying to run the FastChat Docker setup, but the model never gets loaded by the model worker. Error below:
```
fastchat-docker-fastchat-worker-1 | 2023-09-26 08:04:49 | INFO | model_worker | args:…
```
-
Thank you for providing the wonderful repo.
The performance of fastchat-t5-3b is surprisingly good. I am applying it to closed-book QA (answering questions using the context provided). I see a hug…
-
I think `UnboundLocalError: local variable 'stopped' referenced before assignment` is a bug in the code.
```
ibot-model-worker-1 | 2023-10-25 02:02:09 | ERROR | stderr | ERROR: Exception in ASGI appl…
```
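For context, this class of `UnboundLocalError` usually means a flag is only assigned inside a loop or branch that may never run. A minimal sketch of the pattern and its fix (hypothetical simplification, not FastChat's actual worker code):

```python
# Buggy pattern: `stopped` is assigned only inside the loop body, so if
# the iterable is empty, reading it afterwards raises UnboundLocalError.
def generate(tokens):
    for token in tokens:
        stopped = token == "<eos>"
        if stopped:
            break
    return stopped  # UnboundLocalError when `tokens` is empty

# Fix: initialize the flag before the loop so it always has a value.
def generate_fixed(tokens):
    stopped = False
    for token in tokens:
        stopped = token == "<eos>"
        if stopped:
            break
    return stopped
```

Initializing `stopped = False` before the generation loop is the usual one-line fix for this error.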
-
It seems that when I have multiple workers serving different models, only one of them is visible.
For example, I have one worker on port 21002 and another on port 31001. Both are on the same machine …
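For reference, each model worker must register itself with the controller under its own `--worker-address` and `--port`; if two workers advertise the same address, only one shows up. A command sketch under that assumption (model paths and ports are placeholders matching the excerpt):

```shell
# Start the controller (default address http://localhost:21001).
python3 -m fastchat.serve.controller

# Worker 1: model A, listening on 21002.
python3 -m fastchat.serve.model_worker \
    --model-path lmsys/vicuna-7b-v1.5 \
    --port 21002 \
    --worker-address http://localhost:21002 \
    --controller-address http://localhost:21001

# Worker 2: model B, listening on 31001 with a distinct worker address.
python3 -m fastchat.serve.model_worker \
    --model-path lmsys/fastchat-t5-3b-v1.0 \
    --port 31001 \
    --worker-address http://localhost:31001 \
    --controller-address http://localhost:21001
```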
-
When I use lmsys/fastchat-t5-3b-v1.0 for inference over documents, it fails to generate useful responses: it takes a very long time and returns the same answer for every query.
I…
-
Whenever I try to query a PDF using lmsys/fastchat-t5-3b-v1.0, it always returns the sources as the answer. I tried several different prompts following [https://github.com/h2oai/h2ogpt/blob…
-
After installing the requirements, I tried to run the following inside `~/AgentBench`.
```bash
python -m eval --task configs/tasks/mind2web/dev.yaml --agent configs/agents/do_nothing.yaml
```
…
-
What's the problem? (if there are multiple - list as bullet points)
"Prompt templates: One thing you missed: I really don't think the way you are implementing support for HF models is good. From the …
-
Hi,
I tried to convert and use the `lmsys/fastchat-t5-3b-v1.0` model, which is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from S…
-
Potential fix: use a double-LLM method.
user input & PTKBs → LLaMA-2 response → BM25/Pyserini to get passages → use a FastChat-T5 summarization LLM to summarize the passages → generate a response from the summaries and LLa…