-
root@autodl-container-9ed51187fa-29111776:~/autodl-tmp# bash finetune/finetune_visualglm.sh
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --hostfile hostfile_si…
-
The following code raises an error when run:
import argparse
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
from model import chat, VisualGLMModel
model…
-
After setting up the full environment and running web_demo.py, I get this error: `sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output'`. How can this be resolved?
-
Hello, when I run `model, model_args = VisualGLMModel.from_pretrained('visualglm-6b', args=argparse.Namespace(fp16=True, skip_init=True))`, it reports that the zip file cannot be found. The zip file link points to Tsinghua Cloud: https://cloud.tsinghua.edu.cn/f/348b98dffcc…
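When the automatic checkpoint download fails, a common workaround is to fetch and unpack the zip by hand. This is a sketch under two assumptions: that SwissArmyTransformer caches checkpoints under `~/.sat_models` (overridable via the `SAT_HOME` environment variable), and that `<full-share-link>` is replaced with the complete Tsinghua Cloud URL (the one quoted above is truncated):

```shell
# Hypothetical manual download into SAT's default cache directory.
mkdir -p ~/.sat_models
cd ~/.sat_models

# ?dl=1 asks the Tsinghua Cloud share page for a direct file download.
wget -O visualglm-6b.zip "<full-share-link>?dl=1"
unzip visualglm-6b.zip   # should produce a visualglm-6b/ directory

# Or point SAT at a custom cache location instead:
export SAT_HOME=~/.sat_models
```

After this, `from_pretrained('visualglm-6b', ...)` should find the local copy instead of attempting the download again.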
-
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --hostfile hostfile_single finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --…
-
I ran into this problem when using the API:
```
[2023-09-13 19:47:17,004] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-09-13 19:47:22,544] [INFO] building VisualGLMModel…
```
-
![图片](https://github.com/THUDM/VisualGLM-6B/assets/74488961/da30af38-780b-4783-88a7-894c7935f28f)
-
After I set mp_size=2 and then run LoRA training, a dimension mismatch error occurs.
-
![image](https://github.com/THUDM/VisualGLM-6B/assets/64970397/f1318b9f-4b17-4035-8011-f90d0f457528)
-
I'm trying to run the API mode. I copied the model data from Hugging Face and added the following to api.py:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pret…
```
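The added code is cut off above. For reference, a sketch of the Hugging Face loading pattern from the VisualGLM-6B README (assuming the `THUDM/visualglm-6b` Hub weights and a CUDA device; the image path and prompt are illustrative):

```python
from transformers import AutoTokenizer, AutoModel

# Both calls need trust_remote_code=True because the Hub repo ships
# custom modeling and tokenization code alongside the weights.
tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# The custom model class exposes a chat() helper that takes an image path.
# "path/to/example.jpg" is a placeholder, not a file from the repo.
response, history = model.chat(tokenizer, "path/to/example.jpg", "Describe this image.", history=[])
print(response)
```

If the error comes from a partially copied checkpoint, re-downloading the model directory (or letting `from_pretrained` fetch it from the Hub) is usually the first thing to check.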