RVC-Boss / GPT-SoVITS

1 min voice data can also be used to train a good TTS model! (few shot voice cloning)
MIT License
36.26k stars 4.14k forks

CUDA out of memory when training GPT; batch_size setting has no effect #1630

Open Saiditall opened 2 months ago

Saiditall commented 2 months ago

When training the GPT stage, it runs out of VRAM no matter how much VRAM is available, and regardless of what batch_size is set to. Training starts fine but crashes partway through. The audio was not split; the longest clip is 38s.

Traceback (most recent call last):
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\s1_train.py", line 183, in <module>
    main(args)
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\s1_train.py", line 159, in main
    trainer.fit(model, data_module, ckpt_path=ckpt_path)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\call.py", line 43, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\strategies\launchers\subprocess_script.py", line 102, in launch
    return function(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 989, in _run
    results = self._run_stage()
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1035, in _run_stage
    self.fit_loop.run()
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 202, in run
    self.advance()
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 359, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 136, in run
    self.advance(data_fetcher)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 242, in advance
    batch_output = self.manual_optimization.run(kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\optimization\manual.py", line 92, in run
    self.advance(kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\loops\optimization\manual.py", line 112, in advance
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\trainer\call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 381, in training_step
    return self._forward_redirection(self.model, self.lightning_module, "training_step", *args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 633, in __call__
    wrapper_output = wrapper_module(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\parallel\distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\parallel\distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 626, in wrapped_forward
    out = method(*_args, **_kwargs)
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\models\t2s_lightning_module.py", line 39, in training_step
    loss, acc = forward(
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\models\t2s_model.py", line 417, in forward
    reject_xy_dec, _ = self.h(
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\modules\transformer.py", line 170, in forward
    output = mod(
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\modules\transformer.py", line 311, in forward
    x + self._sa_block(x, src_mask, src_key_padding_mask, cache=cache),
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\modules\transformer.py", line 332, in _sa_block
    x = self.self_attn(
  File "C:\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\modules\activation.py", line 404, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "C:\GPT-SoVITS-v2-240821\GPT_SoVITS\AR\modules\patched_mha_with_cache.py", line 452, in multi_head_attention_forward_patched
    attn_output = scaled_dot_product_attention(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 22.50 GiB total capacity; 21.04 GiB already allocated; 0 bytes free; 21.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
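As an aside, the error message's own suggestion (max_split_size_mb) is cheap to try, though it probably won't help here: reserved (21.33 GiB) is nearly equal to allocated (21.04 GiB), so fragmentation is not the main problem. The value 128 below is an illustrative choice, not a recommendation from the project:

```python
import os

# The CUDA caching allocator reads PYTORCH_CUDA_ALLOC_CONF at the first
# CUDA allocation, so this must be set before importing torch
# (or exported in the shell before launching s1_train.py).
# max_split_size_mb caps the block size the allocator will split, which
# can reduce fragmentation when reserved memory >> allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```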
Saiditall commented 2 months ago

By the way, DPO training was enabled. I tried with both 24 GB and 16 GB of VRAM. It looks like it will OOM no matter how much VRAM you have.
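This detail matters: the traceback fails inside a reject_xy_dec forward pass, which suggests DPO runs the transformer a second time per batch on the rejected sequence. A back-of-envelope sketch (my own estimate; layer/width constants below are hypothetical, not read from the repo) of why that roughly doubles activation memory:

```python
def activation_mem_gib(seq_len, n_layers=24, d_model=512, batch=1,
                       bytes_per=2, dpo=False):
    """Crude per-batch activation estimate. Attention score tensors are
    O(seq_len^2) per layer; FFN activations are O(seq_len * d_model).
    All constants are illustrative assumptions."""
    attn = seq_len * seq_len * n_layers * batch * bytes_per
    ffn = seq_len * d_model * 4 * n_layers * batch * bytes_per
    total = attn + ffn
    # With DPO, both the chosen and the rejected sequence are run
    # through the model, so activations are held for two forwards.
    return (2 * total if dpo else total) / 2**30
```

Under this sketch, unticking DPO should roughly halve peak activation memory for the same clips.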

HsiangLeekwok commented 2 months ago

Split any audio longer than 10s again, ideally down to under 10s.

Saiditall commented 2 months ago

Split any audio longer than 10s again, ideally down to under 10s

Would more VRAM solve it? I don't know Japanese, so I can't proofread the transcripts after splitting.

HsiangLeekwok commented 2 months ago

Split any audio longer than 10s again, ideally down to under 10s

Would more VRAM solve it? I don't know Japanese, so I can't proofread the transcripts after splitting.

Sorry, I don't know Japanese either. But this GPT project requires all audio clips to be under 10s. As for models that need no splitting and can take 30s+ audio straight into training, I understand Alibaba's CosyVoice supports clips that long.

XXXXRT666 commented 2 months ago

There is no splitting requirement; clips of up to nearly one minute are supported. Your VRAM is simply insufficient.

floresguangxi commented 2 months ago

How much VRAM does one minute take? With my 16 GB, 12s is the longest I dare go.

Saiditall commented 1 month ago

Solved after switching to a GPU with more VRAM. But with batch_size set to 1, usage stayed under 16 GB, so I don't understand why the 16 GB and 24 GB cards still OOM'd. Beyond 10s, each additional 10s of audio seems to cost roughly another 12-16 GB of VRAM.

How much VRAM does one minute take? With my 16 GB, 12s is the longest I dare go.

For one minute, going by that rule of thumb, you'd need around 64 GB of VRAM.
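One caveat on that extrapolation: the failing call is scaled_dot_product_attention, whose score tensors grow with the square of the token count, so a linear "12-16 GB per extra 10s" rule likely underestimates a 60s clip. A small sketch (frame rate and layer/head counts are illustrative assumptions, not values taken from the GPT-SoVITS code):

```python
def attn_scores_gib(seconds, tokens_per_sec=25, n_layers=24,
                    n_heads=16, bytes_per=2):
    """Memory held by attention score tensors alone, in GiB.
    Grows quadratically with clip length: doubling the clip
    quadruples this term."""
    t = seconds * tokens_per_sec  # hypothetical token count
    return n_layers * n_heads * t * t * bytes_per / 2**30

# A 60s clip needs 36x the attention-score memory of a 10s clip,
# not 6x as a linear rule would suggest.
ratio = attn_scores_gib(60) / attn_scores_gib(10)
```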