-
### System Info
torch==2.4.0
transformers==4.45.0
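A minimal sketch for collecting a "System Info" block like the one above, using only the standard library; the two package names are the ones pinned here, and the helper name is illustrative:

```python
# Minimal sketch: print installed package versions for a bug report,
# using only the standard library (Python >= 3.8).
from importlib.metadata import version, PackageNotFoundError

def report(pkg: str) -> str:
    """Return 'pkg==x.y.z', or a note if the package is absent."""
    try:
        return f"{pkg}=={version(pkg)}"
    except PackageNotFoundError:
        return f"{pkg} (not installed)"

for pkg in ("torch", "transformers"):
    print(report(pkg))
```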
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own mo…
-
### The model to consider.
https://huggingface.co/THUDM/glm-4-9b-chat
### The closest model vllm already supports.
chatglm
### What's your difficulty of supporting the model you want?
_No respons…
-
I'm testing ChatGLM. After following the instructions in [README.md](https://github.com/cckuailong/SuperAdapters#readme), I ran:
python finetune.py --model_type chatglm --data "data/train/" --model_path "L…
-
[2024-02-04 17:56:47,007] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Using /root/.cache/torch_extensions/py311_cu116 as PyTorch extensions root...
Using /root/.…
-
![image](https://user-images.githubusercontent.com/10215059/231936417-5b8e4c00-57e2-408b-baa1-18183730212c.png)
Running `python predict_demo.py` under /ChatGLM-6B/textgen/examples/chatglm raises an error; the glm6B model used is the ori…
-
Since I wanted CUDA acceleration, I set the environment variable CMAKE_ARGS="-DGGML_CUBLAS=ON" and then installed and built with ```pip install git+https://github.com/li-plus/chatglm.cpp.git@main```.
Then I tried to run the command ```streamlit run .\chatglm.cpp\examples\chatglm3_…
-
chatGLM-6B, fp16, batch=6: an input length of 2000 can be supported.
chatGLM-6B + fastll, fp16, batch=4: the input length needs…
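The memory pressure behind these batch/length limits can be estimated from the fp16 KV-cache size. A back-of-the-envelope sketch, assuming ChatGLM-6B's published configuration (28 layers, hidden size 4096); the numbers are illustrative estimates, not measurements:

```python
# Back-of-the-envelope fp16 KV-cache size for ChatGLM-6B.
# Assumed config (public ChatGLM-6B checkpoint): 28 layers, hidden size 4096.
LAYERS, HIDDEN, FP16_BYTES = 28, 4096, 2

def kv_cache_bytes(batch: int, seq_len: int) -> int:
    # 2x for the K and V tensors, one pair cached per layer.
    return 2 * LAYERS * HIDDEN * seq_len * batch * FP16_BYTES

gib = kv_cache_bytes(batch=6, seq_len=2000) / 2**30
print(f"batch=6, seq=2000 -> {gib:.2f} GiB of KV cache")
```

At batch=6 and 2000 tokens this works out to roughly 5 GiB of cache on top of the ~12 GiB of fp16 weights, which is consistent with the batch size having to drop as input length grows.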
-
from transformers import AutoTokenizer, AutoModel
# trust_remote_code loads VisualGLM's custom modeling code from the Hub repo
tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
# model = AutoModel.from_pretrained("THUDM/visualglm-6b", tr…
-
(gh_chatglm_finetuning) ub2004@ub2004-B85M-A0:~/llm_dev/chatglm_finetuning$ python data_utils.py
Traceback (most recent call last):
File "/home/ub2004/llm_dev/chatglm_finetuning/data_utils.py", li…
-
ChatGLM3's requirements.txt pins "transformers==4.40.0" and "vllm>=0.4.2", but the latest vllm (0.5.3) [requirements file](https://github.com/vllm-project/vllm/blob/v0.5.3/requirements-common.txt) requires "transformers>=4.42.4", which creates a confl…
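The clash can be checked mechanically with the `packaging` library (the same version-matching logic pip uses); the versions below are the ones quoted above:

```python
# Check whether ChatGLM3's transformers pin satisfies vllm 0.5.3's floor.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pinned = Version("4.40.0")           # transformers pin in ChatGLM3's requirements.txt
required = SpecifierSet(">=4.42.4")  # transformers floor in vllm 0.5.3

# False -> no single transformers version satisfies both files.
print(pinned in required)
```

Because the pin and the floor are disjoint, pip's resolver cannot satisfy both files in one environment; one of the two constraints has to be relaxed.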