FunAudioLLM / CosyVoice

Multi-lingual large voice generation model, providing inference, training and deployment full-stack ability.
https://funaudiollm.github.io/
Apache License 2.0

DeepSpeed ZeRO-3 training: error when saving the model #177

Open qxde01 opened 1 month ago

qxde01 commented 1 month ago

Describe the bug: When fine-tuning with DeepSpeed ZeRO-3, only about 11 GB of GPU memory is needed. I am using 4x 1080Ti, CUDA 12.1, torch 2.0.1, but whenever the model is saved at the end of an epoch, the following error always appears:

 File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 1855, in forward
      loss = self.module(inputs, kwargs)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in callimpl
      result = forward_call(*args, kwargs)
    File " /home/gpu/CosyVoice/cosyvoice/llm/llm.py", line 108, in forward
      text_token, text_token_len = self.encode(text_token, text_token_len)
    File " /home/gpu/CosyVoice/cosyvoice/llm/llm.py", line 71, in encode
      encoder_out, encoder_mask = self.text_encoder(text, text_lengths, decoding_chunk_size=1, num_decoding_left_chunks=-1)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in callimpl
      result = forward_call(*args, kwargs)
    File " /home/gpu/CosyVoice/cosyvoice/transformer/encoder.py", line 145, in forward
      xs, pos_emb, masks = self.embed(xs, masks)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in callimpl
      result = forward_call(*args, kwargs)
    File " /home/gpu/CosyVoice/cosyvoice/transformer/subsampling.py", line 111, in forward
      x = self.out(x)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in callimpl
      result = forward_call(*args, kwargs)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 217, in forward
      input = module(input)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in callimpl
      result = forward_call(*args, kwargs)
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/normalization.py", line 190, in forward
      return F.layer_norm(
    File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2515, in layer_norm
      return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
  RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
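For context, this error class is easy to reproduce in isolation; the snippet below is only an illustration with made-up names, not CosyVoice code:

import torch
import torch.nn.functional as F

layer = torch.nn.Linear(4, 4)

with torch.inference_mode():
    frozen_w = layer.weight.clone()   # created under inference_mode -> inference tensor

x = torch.randn(2, 4, requires_grad=True)
try:
    y = F.linear(x, frozen_w)         # autograd must save frozen_w for backward
except RuntimeError as e:
    print(e)                          # "Inference tensors cannot be saved for backward. ..."

# Workaround named in the error message: clone outside inference mode first.
y = F.linear(x, frozen_w.clone())
y.sum().backward()                    # works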

ZeRO-3 config:

{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 1,
  "steps_per_print": 10,
  "gradient_clipping": 5,
  "wall_clock_breakdown": false,
  "bfloat16": {
    "enabled": false
  },
  "fp16": {
    "enabled": false
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true,
      "ratio": 1.0
    },
    "offload_param": {
      "device": "cpu",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1000000000,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1000000000,
    "stage3_max_reuse_distance": 1000000000,
    "gather_16bit_weights_on_model_save": false,
    "elastic_checkpoint": true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 0.001,
      "weight_decay": 0.0001,
      "torch_adam": true,
      "adam_w_mode": true
    }
  },
  "comms_logger": {
    "enabled": true,
    "verbose": true,
    "prof_all": true,
    "debug": false
  },
  "timer": {
    "barrier_timeout": 1800
  }
}
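Purely as an illustration of how a ZeRO-3 JSON like this is normally consumed (CosyVoice's train.py instead takes the file through its --deepspeed_config flag, so the names below are a generic sketch with a placeholder model):

import deepspeed
import torch

model = torch.nn.Linear(8, 8)  # placeholder model for the sketch

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="./conf/ds_stage3.json",  # the JSON shown above
)
loss = engine(torch.randn(1, 8)).sum()
engine.backward(loss)
engine.step()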

How can this problem be solved? Thanks.

aluminumbox commented 1 month ago

Well, I haven't tried ZeRO-3. Check https://stackoverflow.com/questions/75517324/runtimeerror-inference-tensors-cannot-be-saved-for-backward-to-work-around-you and try changing @torch.inference_mode to @torch.no_grad in executor.py. Please tell me if it works.
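For what it's worth, the practical difference (minimal sketch with assumed names): a tensor produced under no_grad() is an ordinary tensor and can still be saved for backward later, whereas one produced under inference_mode() cannot.

import torch

lin = torch.nn.Linear(4, 4)

with torch.no_grad():
    a = lin.weight * 2.0   # normal tensor, grad simply not tracked here
with torch.inference_mode():
    b = lin.weight * 2.0   # inference tensor

print(a.is_inference())    # False -> safe to reuse in a later autograd graph
print(b.is_inference())    # True  -> reusing it triggers the error above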

qxde01 commented 1 month ago

Thanks, but that doesn't work for me. What I changed is in llm.py:

# @torch.inference_mode()              # decorator commented out
def inference(    ...    ) -> torch.Tensor:
    with torch.no_grad():              # body wrapped in no_grad instead
        device = text.device
        ......

It still fails with the same error:


[2024-07-22 11:35:40,269] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint epoch_0_whole is ready now!
[2024-07-22 11:35:40,269] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint epoch_0_whole is ready now!
Traceback (most recent call last):
  File "/home/gpu/CosyVoice/examples/libritts/cosyvoice/cosyvoice/bin/train.py", line 136, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/gpu/CosyVoice/examples/libritts/cosyvoice/cosyvoice/bin/train.py", line 132, in main
    executor.train_one_epoc(model, optimizer, scheduler, train_data_loader, cv_data_loader, writer, info_dict, group_join)
  File "/home/gpu/CosyVoice/cosyvoice/utils/executor.py", line 67, in train_one_epoc
    info_dict = batch_forward(model, batch_dict, info_dict)
  File "/home/gpu/CosyVoice/cosyvoice/utils/train_utils.py", line 212, in batch_forward
    info_dict['loss_dict'] = model(batch, device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 1846, in forward
    loss = self.module(*inputs, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/home/gpu/CosyVoice/cosyvoice/llm/llm.py", line 108, in forward
    text_token, text_token_len = self.encode(text_token, text_token_len)
  File "/home/gpu/CosyVoice/cosyvoice/llm/llm.py", line 71, in encode
    encoder_out, encoder_mask = self.text_encoder(text, text_lengths, decoding_chunk_size=1, num_decoding_left_chunks=-1)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/home/gpu/CosyVoice/cosyvoice/transformer/encoder.py", line 145, in forward
    xs, pos_emb, masks = self.embed(xs, masks)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/home/gpu/CosyVoice/cosyvoice/transformer/subsampling.py", line 111, in forward
    x = self.out(x)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.

My training parameters are:

pretrained_model_dir=../../../pretrained_models/CosyVoice-300M-Instruct
export CUDA_VISIBLE_DEVICES="0,1,2,3"
job_id=1986
#nccl
dist_backend=nccl
num_workers=2
prefetch=100
train_engine=deepspeed
#torch_ddp
torchrun --nnodes=1 --nproc_per_node=4  --master_port=9901  cosyvoice/bin/train.py \
      --train_engine $train_engine \
      --config conf/cosyvoice.yaml \
      --train_data data/train2.data.list \
      --cv_data data/dev.data.list \
      --model llm \
      --checkpoint $pretrained_model_dir/llm.pt \
      --model_dir `pwd`/exp/llm/$train_engine \
      --tensorboard_dir `pwd`/tensorboard/llm/$train_engine \
      --ddp.dist_backend $dist_backend \
      --num_workers ${num_workers} \
      --prefetch ${prefetch} \
      --pin_memory \
      --timeout 600 \
      --deepspeed_config ./conf/ds_stage3.json \
      --deepspeed.save_states model_only
aluminumbox commented 1 month ago

Not llm.py. Try changing https://github.com/FunAudioLLM/CosyVoice/blob/main/cosyvoice/utils/executor.py#L82 to torch.no_grad().
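For anyone landing here, a sketch of the kind of change being pointed at; the method name and arguments below are assumptions, only the decorator swap at that line matters:

import torch

class Executor:
    # @torch.inference_mode()          # old decorator: CV-pass tensors become inference tensors
    @torch.no_grad()                   # replacement: plain no-grad context
    def cv(self, model, cv_data_loader, writer, info_dict, on_batch_end=True):
        model.eval()
        for batch_idx, batch_dict in enumerate(cv_data_loader):
            ...

A plausible reading of the failure: under ZeRO-3, parameters gathered during the inference-mode CV pass come back as inference tensors and persist into the next training forward, which would explain why the error only shows up right after the end-of-epoch checkpoint/CV step; torch.no_grad() avoids marking them as inference tensors.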

qxde01 commented 1 month ago

Thank you. It works after changing executor.py.