MoonInTheRiver / DiffSinger

DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (SVS & TTS); AAAI 2022; Official code
MIT License
4.26k stars 713 forks

Multi-GPU training & batchsize problem #102

Open X-Drunker opened 10 months ago

X-Drunker commented 10 months ago

Hi, I really appreciate your work and now I'm going to train the model on this pipeline. My issues are as follows:

  1. I note that you have adapted the code for multi-GPU training with DDP, but I can't figure out how to actually train with multiple GPUs. Should I set self.use_ddp = True here?
  2. In the paper you mention that you trained DiffSinger on 1 NVIDIA V100 GPU with a batch size of 48. However, I can't find any configurable variable related to batch size. If I want to train with multiple GPUs, do I need to scale the batch size to match the number of GPUs? Any suggestion is welcome.
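
For context on point 2, the usual DDP convention (an assumption here, since I haven't located the batch-size variable in this codebase) is that each GPU process loads its own batch, so the configured value acts as a *per-GPU* batch size and the effective global batch grows with the number of GPUs:

```python
# Sketch of the standard PyTorch DDP batch-size convention.
# Whether DiffSinger's trainer follows this exactly is an assumption;
# the numbers below are illustrative.
per_gpu_batch = 48   # batch size reported in the paper for 1 V100
world_size = 4       # hypothetical number of GPUs under DDP

# Each DDP process steps on its own per_gpu_batch samples,
# so the effective (global) batch size is their product.
effective_batch = per_gpu_batch * world_size
print(effective_batch)  # 192
```

If that convention holds, reproducing the paper's setting on 4 GPUs would mean a smaller per-GPU batch (e.g. 12) to keep the effective batch near 48, assuming the learning rate is left unchanged.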