Closed: arieszhang1994 closed this issue 10 months ago.
Thanks for your attention! Most Amphion tasks support multi-GPU training based on Accelerate. Which model do you want to train? For example, if training VALL-E, you just need to specify `--gpu` when running `run.sh` (such as `--gpu "0,1,2,3"`).
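For instance, a full invocation might look like the sketch below (the recipe path, stage, and experiment name here are placeholders, not taken from this thread; adapt them to the recipe you are running):

```bash
# Hypothetical example: train on four GPUs by listing their ids.
sh egs/tts/VALLE/run.sh --stage 2 --name my_valle_exp --gpu "0,1,2,3"
```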
Thank you so much!
https://github.com/open-mmlab/Amphion/blob/main/egs/vocoder/gan/tfr_enhanced_hifigan/README.md
This recipe cannot use multi-GPU.
Hi @hscspring, all training in this repo supports multi-GPU via Hugging Face Accelerate. Kindly check line 90 of the script you mentioned (`CUDA_VISIBLE_DEVICES=$gpu accelerate launch`) and make sure `CUDA_VISIBLE_DEVICES` has more than one value.
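In other words, the `--gpu` value ends up in `CUDA_VISIBLE_DEVICES`, so Accelerate sees every listed device and spawns one training process per GPU. Roughly, the launch line boils down to the following (illustrative only; the actual `train.py` arguments are in the script itself):

```bash
# With --gpu "0,2", physical GPUs 0 and 2 become cuda:0 and cuda:1
# inside the job, and Accelerate launches one process per visible GPU.
CUDA_VISIBLE_DEVICES="0,2" accelerate launch bins/vocoder/train.py  # plus the recipe's own arguments
```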
@jiaqili3 thanks for your reply~ I used this command:
```bash
bash egs/vocoder/gan/tfr_enhanced_hifigan/run.sh --stage 2 --name xtts2 --gpu "0,2"
```
and got this error:
```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 291 292 293 294 295 296 297 298 299 300 301 302
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
Training Epoch 0:   0%| | 1/8140 [00:39<89:48:57, 39.73s/batch]
Traceback (most recent call last):
  File "Amphion/bins/vocoder/train.py", line 98, in <module>
    main()
  File "Amphion/bins/vocoder/train.py", line 94, in main
    trainer.train_loop()
  File "Amphion/models/vocoders/gan/gan_vocoder_trainer.py", line 597, in train_loop
    train_total_loss, train_losses = self._train_epoch()
  File "Amphion/models/vocoders/gan/gan_vocoder_trainer.py", line 725, in _train_epoch
    total_loss, losses = self._train_step(batch)
  File "Amphion/models/vocoders/gan/gan_vocoder_trainer.py", line 824, in _train_step
    audio_pred = self.generator.forward(mel_input)
  File "amphion/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1519, in forward
    inputs, kwargs = self._pre_forward(*inputs, **kwargs)
  File "amphion/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1413, in _pre_forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 291 292 293 294 295 296 297 298 299 300 301 302
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
```
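This error usually means some parameters did not contribute to the loss on a given rank (here indices 291-302, plausibly discriminator weights untouched during a generator-only step). I don't know how Amphion's trainer builds its `Accelerator`, but with Accelerate the standard workaround is enabling `find_unused_parameters=True` through `DistributedDataParallelKwargs`; here is a minimal, self-contained sketch (the dummy model and optimizer are placeholders, not Amphion's):

```python
import torch
from accelerate import Accelerator, DistributedDataParallelKwargs

# Tell DDP to tolerate parameters that receive no gradient in a step;
# this trades a little overhead for avoiding the "Expected to have
# finished reduction" crash when only part of the model is used.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

# Dummy model/optimizer just to show the wiring; Amphion's trainer
# builds its own (see models/vocoders/gan/gan_vocoder_trainer.py).
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.Adam(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)
```

If the trainer wraps the model with `torch.nn.parallel.DistributedDataParallel` directly, the equivalent is passing `find_unused_parameters=True` to that constructor instead.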
I had the same error when training the VitsSVC recipe. Any advice on how to solve it?
```bash
sh egs/svc/VitsSVC/run.sh --stage 2 --name vitsvc --gpu "1,6"
```
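As the error text itself suggests, setting `TORCH_DISTRIBUTED_DEBUG` prints exactly which parameters missed gradients; for example, reusing the command above (the variable propagates through run.sh to the training processes):

```bash
TORCH_DISTRIBUTED_DEBUG=DETAIL sh egs/svc/VitsSVC/run.sh --stage 2 --name vitsvc --gpu "1,6"
```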
@jiaqili3 @RMSnow @hscspring
Thank you for the great work! I want to train a model with more data, but I don't know whether Amphion supports multi-GPU training now. If not, will it be supported in the future?