chrisdonahue / wavegan

WaveGAN: Learn to synthesize raw audio with generative adversarial networks
MIT License

Question regarding training efficiency #81

Open xyz010 opened 4 years ago

xyz010 commented 4 years ago

Hello, I am training on 192 wav files of around 7 seconds duration each on a Tesla V100 GPU. While training, I monitor GPU utilization with nvidia-smi, and it fluctuates heavily between 0% and 60%. This is probably an indicator that there is a bottleneck somewhere and that the GPU isn't being fed data fast enough. Has anyone else come across this issue? Any ideas on how to find the bottleneck?
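One way to confirm an input-pipeline bottleneck is to time how long each step spends waiting for data versus running the training op. A minimal, framework-agnostic sketch (the `batches` iterator and `train_step` callable are placeholders for your real data loader and training call, not part of the WaveGAN codebase):

```python
import time

def profile_loader(batches, train_step):
    """Split wall-clock time into data-wait vs. compute.

    Returns (data_fraction, compute_fraction). A large data fraction
    suggests the loader, not the GPU, is the bottleneck.
    """
    data_t = compute_t = 0.0
    it = iter(batches)
    while True:
        t0 = time.perf_counter()
        try:
            batch = next(it)  # time spent waiting on the input pipeline
        except StopIteration:
            break
        t1 = time.perf_counter()
        train_step(batch)     # time spent in the actual training op
        t2 = time.perf_counter()
        data_t += t1 - t0
        compute_t += t2 - t1
    total = data_t + compute_t
    return data_t / total, compute_t / total
```

If the data fraction dominates, increasing loader parallelism or prefetching is usually the fix; if compute dominates, the fluctuating nvidia-smi readings may just be sampling noise.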

vishal1o8 commented 4 years ago

Hey xyz010,

I'm facing a similar issue. Were you able to solve it?

markhanslip commented 3 years ago

Hi,

I experienced something similar when training with phase shuffle - my guess is that with some setups, phase shuffle runs on the CPU, so there's a lot of back and forth between GPU and CPU during training. I just turned it off (--phaseshuffle 0), which makes training much faster, although the generated samples are a bit noisier. It's worth the trade-off imo.
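For context on what the op actually does: the WaveGAN paper's phase shuffle perturbs each discriminator layer's activations by a random time shift of up to n samples, filling the gap by reflection. A NumPy sketch for illustration only (the repo implements this in TensorFlow; function name and layout here are my own):

```python
import numpy as np

def phase_shuffle(x, rad, rng=np.random.default_rng(0)):
    """Shift each example along the time axis by a random offset in
    [-rad, rad], using reflection padding to fill the exposed edge.

    x: array of shape (batch, time, channels). Requires rad < time.
    """
    batch, t, ch = x.shape
    out = np.empty_like(x)
    for i in range(batch):
        shift = int(rng.integers(-rad, rad + 1))
        # reflect-pad by rad on both ends of the time axis, then crop
        padded = np.pad(x[i], ((rad, rad), (0, 0)), mode="reflect")
        start = rad - shift
        out[i] = padded[start:start + t]
    return out
```

Because the shift is drawn per example per step, a naive implementation can involve dynamic slicing that some setups schedule on the CPU, which would explain the GPU/CPU ping-pong described above.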