THUDM / ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型

[BUG/Help] p-tuning with peft and deepspeed: AttributeError: 'NoneType' object has no attribute 'shape' #675

Open · xxhh1212 opened this issue 7 months ago

xxhh1212 commented 7 months ago

Is there an existing issue for this?

Current Behavior

When training the model with peft and deepspeed, the input arguments do not seem to be passed through:

```
Traceback (most recent call last):
  File "/root/autodl-fs/xxhh/train_temp.py", line 124, in <module>
    outputs = model(**batch, use_cache=False)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1807, in forward
    loss = self.module(*inputs, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 1178, in forward
    return self.base_model(inputs_embeds=inputs_embeds, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/autodl-fs/xxhh/glm2/modeling_chatglm.py", line 943, in forward
    transformer_outputs = self.transformer(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/autodl-fs/xxhh/glm2/modeling_chatglm.py", line 807, in forward
    batch_size, seq_length = input_ids.shape
AttributeError: 'NoneType' object has no attribute 'shape'
```
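Reading the traceback: peft's prompt-learning `forward` (peft_model.py, line 1178) embeds the tokens itself and calls the base model with `inputs_embeds` only, so `input_ids` reaches `ChatGLMModel.forward` as `None`, and line 807 (`batch_size, seq_length = input_ids.shape`) crashes. Below is a minimal sketch of the kind of guard that would avoid the crash, assuming `inputs_embeds` arrives in the standard HF `[batch, seq, hidden]` layout; whether the rest of ChatGLM2's forward can actually consume `inputs_embeds` correctly is a separate question:

```python
# Hedged sketch for modeling_chatglm.py, ChatGLMModel.forward (around line 807):
# derive the shape from whichever input is actually present instead of
# unconditionally reading input_ids.shape.
if input_ids is not None:
    batch_size, seq_length = input_ids.shape
elif inputs_embeds is not None:
    # Assumption: peft passes inputs_embeds as [batch_size, seq_length, hidden_size].
    batch_size, seq_length = inputs_embeds.shape[:2]
else:
    raise ValueError("Either input_ids or inputs_embeds must be provided")
```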

Expected Behavior

No response

Steps To Reproduce

print("设置deepspeed参数") model, optimizer, _, lr_scheduler = deepspeed.initialize(model=model, args=args, config=ds_config, dist_init_required=True) model.train()

创建文件夹

path = args.save_loss_path print(path) os.makedirs(path, exist_ok=True)

global_step = 0

for epoch in range(args.num_train_epochs): save_loss_file = open(args.save_loss_path + "epoch-{}.txt".format(epoch) , mode="w" , encoding="utf-8")

patience_counter = 0  # 用于跟踪连续无改进步数
best_loss = float('inf')  # 初始化最佳Loss为无穷大

model.train()
for step, batch in tqdm(enumerate(train_dataloader), total=len(train_dataloader), unit="batch"):
    batch["input_ids"] = batch["input_ids"].to("cuda")
    batch["labels"] = batch["labels"].to("cuda")

    outputs = model(**batch, use_cache=False)
    loss = outputs.loss

    if args.gradient_accumulation_steps > 1:
        loss = loss / args.gradient_accumulation_steps
    model.backward(loss)
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

训练部分代码如下。
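The peft wrapping itself is not shown above. For context, a typical p-tuning setup that routes the forward pass through peft's prompt-learning path (and therefore through `inputs_embeds`) looks roughly like the sketch below; `PromptEncoderConfig`, `TaskType`, and `get_peft_model` are real peft APIs, but the concrete hyperparameter values here are hypothetical:

```python
from peft import PromptEncoderConfig, TaskType, get_peft_model

# Hypothetical p-tuning config; the issue does not show the actual values used.
peft_config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,  # causal LM task for ChatGLM2-6B
    num_virtual_tokens=20,         # number of learnable prompt tokens
    encoder_hidden_size=128,       # hidden size of the prompt encoder MLP
)

# Wraps the base model; the wrapper's forward embeds input_ids itself and
# then calls the base model with inputs_embeds only (the path in the traceback).
model = get_peft_model(model, peft_config)
```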

Environment

- OS: Ubuntu 20.04
- Python: 3.10
- Transformers: 4.33.0
- PyTorch: 2.1.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`):
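The CUDA line above was left blank; a quick way to fill it in, along with the deepspeed and peft versions relevant to this report (all standard `__version__` attributes), is:

```python
import torch, transformers, deepspeed, peft

# Print the library versions relevant to this report, plus CUDA availability.
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("deepspeed:", deepspeed.__version__)
print("peft:", peft.__version__)
```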

Anything else?

No response