-
Hello - I notice that the chat arena version of fastchat-t5-3b-v1.0 gives quite different answers from the model when it is downloaded manually and run using fastchat.serve.cli --model-…
-
lmsys.org states that FastChat-T5 supports a context size of 4K. How do I get it to work? I get an error as soon as I go above 2K.
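While debugging the limit itself, a common workaround is to trim the oldest conversation turns so the prompt stays under the advertised 4K window. A minimal sketch (the whitespace-based `count_tokens` is a placeholder; real code would use the model's tokenizer, e.g. `AutoTokenizer` for fastchat-t5-3b-v1.0):

```python
# Hypothetical helper: keep only the most recent turns that fit a token budget.

def count_tokens(text: str) -> int:
    # Placeholder tokenizer: real code would count with the model's tokenizer.
    return len(text.split())

def trim_history(turns: list[str], budget: int) -> list[str]:
    """Keep the most recent turns whose combined token count fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

turns = ["a b c", "d e", "f g h i"]
print(trim_history(turns, 6))  # → ['d e', 'f g h i']
```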
-
Is there any way to run it in 4 GB or less of VRAM?
GGML? Or GPTQ?
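A back-of-the-envelope estimate suggests why quantization is needed here: at fp16 the weights alone of a ~3B-parameter model exceed 4 GB, while 4-bit quantization (GPTQ, or a quantized GGML file) brings them well under it. A rough sketch (weights only; activations, KV cache, and framework overhead add more, so treat these as lower bounds):

```python
# Approximate weight memory for an n-parameter model at a given precision.

def weight_gb(n_params: float, bits_per_param: float) -> float:
    return n_params * bits_per_param / 8 / 1024**3

N = 3e9  # fastchat-t5-3b has ~3 billion parameters
for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_gb(N, bits):.1f} GB")
# fp16: ~5.6 GB, int8: ~2.8 GB, 4-bit: ~1.4 GB
```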
-
Human: Who are you?
Assistant: :,,,,
-
**CPU model working fine**
python -m fastchat.serve.cli --model-path /path/to/fastchat-t5-3b-v1.0 --device cpu
**Both GPU runs got errors**
python -m fastchat.serve.cli --model-path /path/to/fa…
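For reference, the only difference between the two invocations is the `--device` flag, which fastchat.serve.cli accepts as cpu / cuda / mps. A minimal sketch of selecting it programmatically (the detection logic here is a hand-rolled stand-in, not fastchat's own):

```python
# Sketch: pick a --device value and assemble the fastchat CLI command.

import importlib.util
import shlex

def pick_device() -> str:
    # Fall back to CPU when torch is absent or no accelerator is visible.
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    return "cpu"

def cli_command(model_path: str, device: str) -> str:
    return shlex.join([
        "python", "-m", "fastchat.serve.cli",
        "--model-path", model_path,
        "--device", device,
    ])

print(cli_command("/path/to/fastchat-t5-3b-v1.0", pick_device()))
```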
-
Hi, I'm trying to use fastchat-t5-3b-v1.0 on macOS following the instructions in the README.
```
Simply run the line below to start chatting. It will automatically download the weights from a Hugg…
```
-
What are all the languages present in the ShareGPT 70,000 conversation dataset which was used to fine-tune FastChat-T5?
The ReadMe file points to [`data_cleaning.md`](https://github.com/lm-sys/Fast…
-
Downloading the project and installing the dependencies both work fine.
First run: python3 -m fastchat.serve.cli --model-path lmsys/fastchat-t5-3b-v1.0 works normally, and the weights are cached under ~/.cache/huggingface/hub/models--lmsys--fastchat-t5-3b-v1.0
But when the model is downloaded manually from the web page: https://hugging…
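When pointing `--model-path` at a manually downloaded checkpoint, the path should be the directory holding the usual Hugging Face files, not a single weights file. A hedged sketch of a sanity check (the file list below is the typical minimum, not an official contract):

```python
# Check that a local model directory contains the expected Hugging Face files.

from pathlib import Path

EXPECTED = ["config.json", "tokenizer_config.json"]

def missing_files(model_dir: str) -> list[str]:
    """Return the expected files that are absent from model_dir."""
    root = Path(model_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]

# Usage: pass the directory itself to the CLI, e.g.
#   python3 -m fastchat.serve.cli --model-path /path/to/fastchat-t5-3b-v1.0
```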
-
#### I'm attempting to fine-tune FastChat-T5 locally using the command:
torchrun --nproc_per_node=1 --master_port=9778 fastchat/train/train_flant5.py \
--model_name_or_path {my_path}/test_fa…
-
Not really a bug with Transformers.js, but with the conversion script.
Got an error when trying to convert [lmsys/fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) with `text2…