Open L2ne opened 5 years ago
I'm training from a different repo, but try passing "--gpu_fraction 0.85" on the command line when training. Sometimes it tries to grab 100% of your GPU memory and chokes.
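For reference, a flag like --gpu_fraction is usually wired into a TF 1.x session roughly like this (a minimal sketch, not necessarily this repo's code; `make_session` is a hypothetical helper):

```python
import tensorflow as tf  # TF 1.x API

def make_session(gpu_fraction=0.85):
    # Cap how much of the GPU TensorFlow may claim, and allocate
    # lazily (allow_growth) instead of grabbing it all up front.
    gpu_options = tf.GPUOptions(
        per_process_gpu_memory_fraction=gpu_fraction,
        allow_growth=True)
    return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
```

Note this only caps TensorFlow's allocation; if the model genuinely needs more than the cap, you still get an OOM, just earlier and more predictably.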
@L2ne For a GTX 1060, try changing outputs_per_step to 2 or 3, or even 4, and reducing max_mel_frames. It will reduce sound quality a bit, but it will help with memory.
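To see why this helps: each decoder step emits outputs_per_step mel frames, so raising it shrinks the unrolled decoder (and its activation memory) roughly proportionally. A sketch of the arithmetic (the hparam names are from this thread; `decoder_steps` is a hypothetical helper, not the repo's code):

```python
import math

def decoder_steps(max_mel_frames, outputs_per_step):
    """Approximate number of decoder iterations for one utterance.

    Each step emits `outputs_per_step` mel frames, so increasing it
    reduces the number of unrolled decoder steps (and memory) at some
    cost in output quality.
    """
    return math.ceil(max_mel_frames / outputs_per_step)

# Going from 1 to 3 frames per step cuts a 900-frame target
# from 900 decoder steps down to 300.
```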
I'm also having this problem with the same GPU (a GTX 1060 with 6 GB of VRAM). How much RAM and GPU RAM should we have to train this without having to decrease any parameters?
@Rayhane-mamah when you worked on this (by the way, thanks a lot for this amazing work) what setup (hardware + hparams) did you use to reach maximal quality?
I hit a similar problem: I can start training Tacotron with batch size 32 for a few steps, then suddenly get an OOM. Even after reducing it to 16, it only runs for a few more steps before the OOM.
I already call sess.graph.finalize()
before the training loop, and no error is raised, but the OOM still happens after several steps. I have no idea what's causing this.
I'm trying to run this on a GTX 1060 with 6 GB of RAM. I already reduced the batch size to 16 and only run Tacotron.
Maybe a stupid question, but could I run this on AMD cards? I have these Vega cards lying around gathering dust.