Rayhane-mamah / Tacotron-2

DeepMind's Tacotron-2 Tensorflow implementation
MIT License
2.27k stars · 905 forks

Running into OOM, is my GPU too bad ? #358

Open L2ne opened 5 years ago

L2ne commented 5 years ago

I'm trying to run this on a GTX 1060 with 6 GB of VRAM. I already reduced the batch size to 16 and am only running Tacotron.

Maybe a stupid question, but could I run this on AMD cards? I have these Vegas lying around gathering dust.

Ittiz commented 5 years ago

I'm training from a different repo, but try passing "--gpu_fraction 0.85" on the command line when training. Sometimes it tries to take 100% of your GPU memory and chokes.
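
For reference, a `--gpu_fraction` style flag in a TF 1.x training script usually just maps to the session config below. This is a minimal sketch of that mapping, not this repo's actual code; the 0.85 value mirrors the comment above, and how the training script actually builds its session is an assumption.

```python
import tensorflow as tf

# Cap per-process GPU memory instead of letting TensorFlow grab it all.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.85)
# Alternative: gpu_options = tf.GPUOptions(allow_growth=True) to allocate on demand.
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)
```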

geneing commented 5 years ago

@L2ne For a GTX 1060, try changing outputs_per_step to 2, 3, or even 4, and reducing max_mel_frames. It will reduce sound quality a bit, but it will help with memory.
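
As a rough sketch of how those overrides can be expressed: most Tacotron forks keep their settings in a `tf.contrib.training.HParams` object (TF 1.x). The parameter names below come from this thread; the default values and `tacotron_batch_size` are illustrative assumptions, not the repo's actual defaults.

```python
import tensorflow as tf

# Illustrative hparams object; check hparams.py in your checkout for the real defaults.
hparams = tf.contrib.training.HParams(
    outputs_per_step=1,    # mel frames emitted per decoder step
    max_mel_frames=1000,   # utterances longer than this are dropped or clipped
    tacotron_batch_size=32,
)

# Trade a little quality for memory: more frames per step, shorter max clips.
hparams.parse("outputs_per_step=3,max_mel_frames=900")
print(hparams.outputs_per_step, hparams.max_mel_frames)
```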

levraipixel commented 5 years ago

I'm also having this problem, with the same GPU (GTX 1060 with 6 GB of RAM). How much RAM and GPU RAM do we need to train this without having to decrease any parameters?

levraipixel commented 5 years ago

@Rayhane-mamah, when you worked on this (by the way, thanks a lot for this amazing work), what setup (hardware + hparams) did you use to reach maximal quality?

zhangyi02 commented 5 years ago

I'm running into a similar problem. I can train Tacotron with batch size 32 for a few steps, then I suddenly get OOM. Even after reducing the batch size to 16, it only runs for a few more steps before hitting OOM again. I already call sess.graph.finalize() before the training loop and no error is raised there, yet the OOM still appears after several steps. I have no idea what's causing this.
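
For anyone trying the same check: below is a minimal sketch of the `sess.graph.finalize()` pattern mentioned above, in plain TF 1.x. Freezing the graph before the loop makes TensorFlow raise a RuntimeError if any op is accidentally created inside the loop, which is one common cause of memory that keeps growing until OOM. The names (`x`, `loss`, `train_op`) are placeholders for illustration, not this repo's actual variables.

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 80])
    loss = tf.reduce_mean(tf.square(x))
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
    init = tf.global_variables_initializer()

with tf.Session(graph=graph) as sess:
    sess.run(init)
    sess.graph.finalize()  # any graph modification after this raises RuntimeError
    for step in range(10):
        sess.run(train_op, feed_dict={x: [[0.0] * 80]})
```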