-
Got an error while running the `python3 -m fastchat.serve.cli --model-path lmsys/fastchat-t5-3b-v1.0` command:
ValueError: Unrecognized configuration class for this kind of
AutoModel: AutoModelForC…
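This error usually means a decoder-only auto-class (e.g. `AutoModelForCausalLM`) was applied to an encoder-decoder config: fastchat-t5 is T5-based, so it belongs with `AutoModelForSeq2SeqLM`. A minimal sketch of the dispatch idea — the architecture names and mapping below are illustrative, not the library's actual registry:

```python
# Sketch: pick the right transformers Auto* class from the model's
# architecture string. fastchat-t5 is an encoder-decoder (T5) model,
# so causal-LM auto-classes reject its config. Illustrative mapping only.

SEQ2SEQ_ARCHITECTURES = {"T5ForConditionalGeneration", "BartForConditionalGeneration"}
CAUSAL_ARCHITECTURES = {"GPTNeoXForCausalLM", "LlamaForCausalLM"}

def auto_class_for(architecture: str) -> str:
    """Return the Auto* class name appropriate for an architecture string."""
    if architecture in SEQ2SEQ_ARCHITECTURES:
        return "AutoModelForSeq2SeqLM"
    if architecture in CAUSAL_ARCHITECTURES:
        return "AutoModelForCausalLM"
    raise ValueError(f"Unrecognized architecture: {architecture}")

print(auto_class_for("T5ForConditionalGeneration"))  # AutoModelForSeq2SeqLM
```

In practice, loading fastchat-t5 directly via `transformers.AutoModelForSeq2SeqLM.from_pretrained(...)` avoids the mismatch.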
-
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
PyTorch Hub
### Bug
I am lo…
-
Hi,
I am building a chatbot using an LLM such as fastchat-t5-3b-v1.0 and want to reduce my inference time.
I am loading the entire model onto the GPU using the `device_map` parameter, and making use of hugging f…
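Beyond device placement, a common lever for inference time is batching requests so the fixed per-call overhead is amortized. A toy cost model (all numbers hypothetical, not measured from FastChat):

```python
# Toy cost model: each forward call pays a fixed overhead (kernel
# launches, setup) plus per-prompt work. Batching pays the overhead once.

OVERHEAD = 50    # fixed cost per call (arbitrary units)
PER_PROMPT = 10  # marginal cost per prompt in a batch

def sequential_cost(n_prompts: int) -> int:
    # One call per prompt: the overhead is paid every time.
    return n_prompts * (OVERHEAD + PER_PROMPT)

def batched_cost(n_prompts: int) -> int:
    # One call for the whole batch: the overhead is paid once.
    return OVERHEAD + n_prompts * PER_PROMPT

print(sequential_cost(8))  # 480
print(batched_cost(8))     # 130
```

The same intuition applies to half-precision weights and shorter prompts: both shrink the per-prompt term rather than the fixed one.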
-
Steve says the token responses seem a bit short. Find a way to effectively increase the response size (i.e. the generation length limit).
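Response length is normally bounded by a max-new-tokens cap: decoding stops at EOS or when the cap is hit, so raising the cap directly allows longer answers (FastChat's serving commands expose such an option; check `--help` for the exact flag in your version). A toy decoding loop showing the effect — the "model" here is a stub:

```python
# Toy decoding loop: generation stops at EOS or after max_new_tokens
# steps, so a larger cap directly permits longer responses.
# The stub "model" below never emits EOS.

def generate(prompt_tokens, max_new_tokens, eos_token=-1):
    out = list(prompt_tokens)
    for step in range(max_new_tokens):
        next_token = step  # stub model: emits 0, 1, 2, ...
        if next_token == eos_token:
            break
        out.append(next_token)
    return out

prompt = [101, 102]
print(len(generate(prompt, max_new_tokens=16)) - len(prompt))   # 16
print(len(generate(prompt, max_new_tokens=256)) - len(prompt))  # 256
```

Note that a larger cap only raises the ceiling; if responses still end early, the model is emitting EOS, which is a prompting issue rather than a length-limit one.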
-
Hi,
Below is the code that I am using to do inference on Fastchat LLM.
```python
from llama_index import GPTListIndex, SimpleDirectoryReader, GPTVectorStoreIndex, PromptHelper, LLMPredictor
from langc…
-
Got this error while saving the model after training:
![cuda_error](https://github.com/lm-sys/FastChat/assets/86181705/04d78031-17d8-4985-bd08-d3ffe17505ac)
Also, while trying to load a chec…
-
Hi,
I am facing an issue when giving prompts to LLM models like oasst-pythia-12b and fastchat-t5-3b.
I provided a prompt question along with context, but I see that the prompt is getting clipped off while p…
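Clipping like this typically happens when question + context + generation budget exceed the model's context window. One sketch of a fix is to truncate the retrieved context first so the question itself is never cut; the window size, budget, and whitespace "tokenizer" below are stand-ins, not the models' real values:

```python
# Sketch: fit a prompt into a fixed context window by truncating the
# context, never the question. Whitespace splitting stands in for a
# real tokenizer; 2048/256 are assumed example budgets.

def count_tokens(text: str) -> int:
    return len(text.split())

def build_prompt(question: str, context: str,
                 context_window: int = 2048, max_new_tokens: int = 256) -> str:
    # Reserve room for the generated answer and the question itself.
    budget = context_window - max_new_tokens - count_tokens(question)
    ctx_tokens = context.split()
    if len(ctx_tokens) > budget:
        ctx_tokens = ctx_tokens[:budget]  # drop the tail of the context
    return " ".join(ctx_tokens) + "\n\n" + question

prompt = build_prompt("What is X?", "word " * 5000)
print(count_tokens(prompt))  # 1792 = 1789 context tokens + 3 question tokens
```

With a real tokenizer, the same arithmetic applies: count tokens with the model's tokenizer and keep `prompt_tokens + max_new_tokens` within the window.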
-
I want to self-host the OpenAI API with FastChat. I see this [example](https://github.com/lm-sys/FastChat/blob/main/docs/langchain_integration.md), but I get an error.
I can use the second command `p…
-
1. **terminal 1** - `python3.10 -m fastchat.serve.controller --host localhost --port PORT_N1`
2. **terminal 2** - `CUDA_VISIBLE_DEVICES=0 python3.10 -m fastchat.serve.model_worker --model-path /lmsys…
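The typical third step in this setup is an OpenAI-compatible API server pointed at the controller. A sketch only — the ports are placeholders and the flags should be checked against `--help` for your FastChat version:

```shell
# terminal 3 - OpenAI-compatible REST server (placeholder ports;
# --controller-address must match the controller from terminal 1)
python3.10 -m fastchat.serve.openai_api_server \
    --controller-address http://localhost:PORT_N1 \
    --host localhost --port 8000
```

Once all three processes are up, clients can target `http://localhost:8000/v1` as a drop-in OpenAI API base URL.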