taishan1994 / Llama3.1-Finetuning

Full-parameter fine-tuning, LoRA fine-tuning, and QLoRA fine-tuning for llama3.
Apache License 2.0

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! #4

Closed · super-buster closed this 5 months ago

super-buster commented 5 months ago

Hi, I hit this error when training on a single node with multiple GPUs. I started from my existing FastChat (fschat) training setup and only swapped out the preprocess function, so I can be sure the rest of the pipeline is fine. Based on your preprocess I made two small changes: first, I use two nl_tokens as the separator; second, I use eot_id in place of tokenizer.pad_token_id everywhere. I launch the training script with DeepSpeed ZeRO-3. At first I thought the preprocessed data had to be converted to GPU tensors, but loading it onto the GPU did not make the error go away, and when I compared against the FastChat implementation, the data returned by this function is on the CPU there as well. The error is raised when the loss is computed. Any idea what could be causing it?

import torch
import transformers

# Same value FastChat uses for masked label positions (LabelSmoother.ignore_index).
IGNORE_TOKEN_ID = -100


def preprocess(
    sources,
    tokenizer: transformers.PreTrainedTokenizer,
    system_message: str = "You are a pirate chatbot who always responds in pirate speak!",
):
    begin_of_text_id = tokenizer.get_vocab()["<|begin_of_text|>"]
    start_header_id = tokenizer.get_vocab()["<|start_header_id|>"]
    end_header_id = tokenizer.get_vocab()["<|end_header_id|>"]
    eot_id = tokenizer.get_vocab()["<|eot_id|>"]
    nl_tokens = tokenizer('\n').input_ids
    _system = tokenizer('system').input_ids
    _user = tokenizer('user').input_ids
    _assistant = tokenizer('assistant').input_ids

    # Apply prompt templates
    input_ids, targets = [], []
    for i, source in enumerate(sources):
        input_id, target = [], []
        system = [begin_of_text_id] + [start_header_id] + _system + [end_header_id] + nl_tokens + nl_tokens + tokenizer(system_message).input_ids + [eot_id]
        input_id += system
        target += [IGNORE_TOKEN_ID] * len(input_id)
        assert len(input_id) == len(target)
        for j, sentence in enumerate(source):
            role = sentence["from"]
            value = sentence["value"]
            if role == 'human':
                _input_id = [start_header_id] + _user + [end_header_id] + nl_tokens + nl_tokens + tokenizer(value).input_ids + [eot_id]
                _target = [IGNORE_TOKEN_ID] * len(_input_id)
            elif role == 'gpt':
                _input_id = [start_header_id] + _assistant + [end_header_id] + nl_tokens + nl_tokens + tokenizer(value).input_ids + [eot_id]
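                # Supervise only the assistant response and the closing eot_id;
                # the header tokens and newline separators are masked with IGNORE_TOKEN_ID.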
                _target = [IGNORE_TOKEN_ID] + [IGNORE_TOKEN_ID] * len(_assistant) + \
                          [IGNORE_TOKEN_ID] + [IGNORE_TOKEN_ID] + [IGNORE_TOKEN_ID] + tokenizer(value).input_ids + [eot_id]
            else:
                raise NotImplementedError
            input_id += _input_id
            target += _target
        # print(input_id)
        # print(target)
        # print(tokenizer.decode(input_id))
        # print(len(input_id), len(target))
        assert len(input_id) == len(target)
        assert len(input_id) < tokenizer.model_max_length, "Tokenization Error: input len is larger than model's max length!"
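        # Pad input_ids with eot_id and labels with IGNORE_TOKEN_ID up to model_max_length.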
        input_id += [eot_id] * (tokenizer.model_max_length - len(input_id))
        target += [IGNORE_TOKEN_ID] * (tokenizer.model_max_length - len(target))
        input_ids.append(input_id[:tokenizer.model_max_length])
        targets.append(target[:tokenizer.model_max_length])
    input_ids = torch.tensor(input_ids, dtype=torch.int64)
    targets = torch.tensor(targets, dtype=torch.int64)

    return dict(
        input_ids=input_ids,
        labels=targets,
        attention_mask=input_ids.ne(eot_id),
    )
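
As a side note on the point about CPU tensors: the Hugging Face Trainer moves every batch onto the model's device before the forward pass, so preprocess returning CPU tensors (as FastChat's does) is expected and not the cause of the error. A rough, hypothetical illustration (not the actual Trainer code):

import torch

# Hypothetical CPU batch with the same structure preprocess returns.
batch = {
    "input_ids": torch.zeros(1, 8, dtype=torch.int64),
    "attention_mask": torch.ones(1, 8, dtype=torch.bool),
}
# Roughly what the Trainer does internally before calling the model.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
batch = {k: v.to(device) for k, v in batch.items()}
print({k: v.device for k, v in batch.items()})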

The full traceback is below. Every rank raises the same error; only the CUDA device index in the final line differs (cuda:0 through cuda:3 vs. cpu). One rank's stack:

 File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1859, in train
  File "/share/yanzhongxiang/projects/latest/FastChat/fastchat/train/train_llama3.py", line 361, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2203, in _inner_training_loop
    trainer.train()
      File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1859, in train
trainer.train()
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1859, in train
    tr_loss_step = self.training_step(model, inputs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3147, in training_step
        return inner_training_loop(return inner_training_loop(

  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2203, in _inner_training_loop
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2203, in _inner_training_loop
    train()
  File "/share/yanzhongxiang/projects/latest/FastChat/fastchat/train/train_llama3.py", line 361, in train
    self.accelerator.backward(loss)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2007, in backward
        tr_loss_step = self.training_step(model, inputs)tr_loss_step = self.training_step(model, inputs)

  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3147, in training_step
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3147, in training_step
    self.deepspeed_engine_wrapped.backward(loss, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/deepspeed.py", line 175, in backward
    self.engine.step()
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2169, in step
        self.accelerator.backward(loss)self.accelerator.backward(loss)

  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2007, in backward
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2007, in backward
    trainer.train()
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1859, in train
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2075, in _take_model_step
        self.deepspeed_engine_wrapped.backward(loss, **kwargs)self.deepspeed_engine_wrapped.backward(loss, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/deepspeed.py", line 175, in backward
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/deepspeed.py", line 175, in backward
        self.engine.step()self.engine.step()

  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2169, in step
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2169, in step
    return inner_training_loop(
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2203, in _inner_training_loop
    self.optimizer.step()
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2075, in _take_model_step
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2075, in _take_model_step
    tr_loss_step = self.training_step(model, inputs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3147, in training_step
    self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
    self.optimizer.step()
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
    self.optimizer.step()
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
    self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
    self.accelerator.backward(loss)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2007, in backward
    self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
    self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
    self.deepspeed_engine_wrapped.backward(loss, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/deepspeed.py", line 175, in backward
    self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
    RuntimeErrorself.engine.step(): 
Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2169, in step
    self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 2075, in _take_model_step
    self.optimizer.step()
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
    self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
    self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cpu!
taishan1994 commented 5 months ago

What launch command are you using?

super-buster commented 5 months ago

What launch command are you using?

deepspeed --master_port=$MASTER_PORT /share/projects/latest/FastChat/fastchat/train/train_llama3.py \
    --model_name_or_path /share/yanzhongxiang/cpfs_models/models--NousResearch--Meta-Llama-3-8B-Instruct \
    --data_path /share/datas/sft/processed/0103-tPkkg/lima_vicuna.json \
    --bf16 True \
    --output_dir /share/models/llama3_0515-$random_string-test \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "epoch" \
    --save_total_limit 3 \
    --model_max_length 8192 \
    --learning_rate 3e-6 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --deepspeed zero3_offload_config.json \
    --tf32 True \
    --gradient_checkpointing True \
    --lazy_preprocess True

taishan1994 commented 5 months ago

Try not offloading the model and the optimizer in zero3_offload_config.json.
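
For illustration, a minimal sketch of what that change might look like, written as Python that emits the JSON config. The actual zero3_offload_config.json is not shown in this issue, so the surrounding keys are assumptions based on a typical DeepSpeed ZeRO-3 config; the point is only to drop (or disable) the offload_param / offload_optimizer entries. Note that ZeRO-3 without offload still shards parameters, gradients, and optimizer states across the GPUs.

import json

# Hypothetical ZeRO-3 config without CPU offload; key names follow DeepSpeed's
# documented config schema, but the exact values here are assumptions.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # An offload config would additionally contain entries such as:
        #   "offload_param":     {"device": "cpu", "pin_memory": True},
        #   "offload_optimizer": {"device": "cpu", "pin_memory": True},
        # Removing them (or setting "device": "none") keeps everything on the GPUs.
        "overlap_comm": True,
        "contiguous_gradients": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("zero3_no_offload_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)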

super-buster commented 5 months ago

Try not offloading the model and the optimizer in zero3_offload_config.json.

I just tried it and training works now!!! But isn't doing that basically equivalent to ZeRO-1? And why does this problem occur in the first place?

taishan1994 commented 5 months ago

Some parameters get offloaded to the CPU, and the training framework doesn't account for that.

super-buster commented 5 months ago

Some parameters get offloaded to the CPU, and the training framework doesn't account for that.

Right, I just confirmed that only DeepSpeed ZeRO-3 with offload triggers this; ZeRO-2 and ZeRO-2 with offload train fine. I had trained with all of these configs before, including with different models, but I recently switched servers and rebuilt the environment, so I suspected DeepSpeed itself. I found this issue, where someone reports that the problem appears with DeepSpeed versions newer than 0.14.0: https://github.com/microsoft/DeepSpeed/issues/5538
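
A quick, purely illustrative way to check whether the installed DeepSpeed falls in the range reported there (the ">0.14.0" threshold comes from that linked issue; pinning an older version is an alternative workaround to disabling offload):

from packaging import version

import deepspeed

# Illustrative check only; the ">0.14.0" threshold is taken from the linked DeepSpeed issue.
installed = version.parse(deepspeed.__version__)
if installed > version.parse("0.14.0"):
    print(f"deepspeed {installed}: possibly affected by the ZeRO-3 offload device-mismatch report; "
          "consider deepspeed<=0.14.0 or keeping offload disabled.")
else:
    print(f"deepspeed {installed}: not in the range reported in that issue.")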