-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
After running cli_demo.py, entering "你好" ("hello") produces an error:
Explicitly passing a `revision` is encouraged when loading a model …
-
-
"Firstly, put LLaMA model files under models/LLaMA-HF/ and ChatGLM-6b model files under models/chatglm-6b/."
How can I get these files?
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The error is:
```
load_model_config modle\chatglm-6b-int4...
Loading modle\chatglm-6b-int4...
No compile…
```
-
```
Traceback (most recent call last)
/ChatGLM-6B/ChatGLM_math/chatglm_maths/t10_toy_trl_train_ppo.py:215 in
…
```
-
Running the fine-tuning script for the first time:
```
./finetune/finetune_visualglm.sh
```
All the models are placed in the THUDM directory under the project root:
```
ll THUDM/
total 8
lrwxrwxrwx 1 yaokj5 yaokj5 38 Sep 1 10:38 chatglm-6b -> /xxx/models/chatglm-6…
```
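The listing above shows that the entry under THUDM/ is just a symlink into a shared models directory. A minimal sketch of that layout in plain Python (all paths below are throwaway placeholders, not the issue's real paths):

```python
import os
import tempfile

# Throwaway directories standing in for the real model store and project root.
root = tempfile.mkdtemp()
store = os.path.join(root, "models", "chatglm-6b")
thudm = os.path.join(root, "project", "THUDM")
os.makedirs(store)
os.makedirs(thudm)

# Link the shared checkpoint into the project's THUDM/ directory,
# mirroring the `ll THUDM/` output above.
link = os.path.join(thudm, "chatglm-6b")
os.symlink(store, link)

print(os.path.islink(link))          # True
print(os.readlink(link) == store)    # True
```

The project then sees THUDM/chatglm-6b as a normal directory while the weights live once, outside the repo.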
-
**According to the README, there are two ways to use chatglm2-6b with fastllm:**
**Method 1:**
```
# This is the original program: the model is created through the huggingface interface
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatgl…
```
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Fine-tuning the int4 chatglm model on a single machine: an error occurs while the model is loading, with the message: Only Tensors of floating point and complex dtype can requi…
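The truncated message is PyTorch's standard error when `requires_grad` is requested on a non-floating-point tensor; int4-quantized weights are stored as integer tensors, so turning gradients on for them directly fails. A minimal reproduction in plain PyTorch (independent of the ChatGLM code):

```python
import torch

# Floating-point tensors can track gradients.
w = torch.zeros(2, 2, dtype=torch.float32, requires_grad=True)
print(w.requires_grad)  # True

# Integer tensors (the storage format of quantized weights) cannot.
try:
    torch.zeros(2, 2, dtype=torch.int8, requires_grad=True)
except RuntimeError as err:
    print(err)  # "Only Tensors of floating point and complex dtype can require gradients"
```

This is why parameter-efficient approaches train a small floating-point module alongside the frozen quantized weights rather than the quantized weights themselves.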
-
A quick question: how do I use the model obtained after training with Chatglm-6b?
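The answer depends on which fine-tuning script was used, but the usual PyTorch pattern (sketched here on a toy module, not on ChatGLM itself; the checkpoint filename is hypothetical) is to rebuild the architecture and load the trained state dict back into it, with `strict=False` to tolerate keys the fine-tuning step did not touch:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Toy stand-in for a fine-tuned model; real code would build ChatGLM here.
trained = nn.Linear(4, 2)
ckpt = os.path.join(tempfile.mkdtemp(), "finetuned.bin")  # hypothetical name
torch.save(trained.state_dict(), ckpt)

# Later (e.g. inside a cli_demo-style script): rebuild, then load the weights.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(ckpt), strict=False)
print(torch.equal(trained.weight, restored.weight))  # True
```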
-
As a student I tried your code on Colab, but with only 15 GB of GPU memory I cannot load the 6B model.
I replaced chatGLM-6B in the code with chatGLM-6B-int4 to see whether fine-tuning would work,
but trainer.train() keeps failing with: self and mat2 must have same dtype.
I suspect that after quantization the precision of the model's parameters no longer matches the precision of the batch inputs.
Could you advise how to modify the code so that the quant…
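The suspicion above is easy to confirm in isolation: PyTorch refuses to matrix-multiply tensors of different dtypes, which is what happens when lower-precision inputs meet weights kept in another precision. A minimal sketch in plain PyTorch (outside the training code; casting one operand is shown only to illustrate the mismatch, not as the fix for the trainer):

```python
import torch

a = torch.randn(2, 3, dtype=torch.float16)  # e.g. batch activations in fp16
b = torch.randn(3, 4, dtype=torch.float32)  # e.g. weights in fp32

# Mixed dtypes: mm raises a RuntimeError ("... must have the same dtype").
try:
    torch.mm(a, b)
except RuntimeError as err:
    print(err)

# Casting so both operands share a dtype makes the product well-defined.
out = torch.mm(a.to(b.dtype), b)
print(out.shape)  # torch.Size([2, 4])
```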