lsq357 opened this issue 7 years ago
I've been trying to use WaveNet instead of Griffin-Lim recently.
@feiyuhong Did you finish it? Can you show me some experiment results and the loss curve?
@feiyuhong Can you share a link to some code? I'm interested in porting it to PyTorch!
In my results, I can't get clear speech after a week of training with batch_size=32. It is also too slow: I can only train 1.6k+ steps per day on my single GTX 1080 Ti GPU, so I need more GPUs.
@keithito Do you plan to use a WaveNet vocoder with Tacotron? There is a new repo for this task: https://github.com/r9y9/wavenet_vocoder. Hope to see sound quality as good as in the white paper.
@toannhu Yes, that repo looks great! I'm training right now on LJ Speech. There's some more discussion over at https://github.com/keithito/tacotron/issues/90
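For anyone trying this swap, here is a rough sketch of the interface change in the synthesis step. The `vocoder.generate(...)` call is a hypothetical placeholder for whatever pretrained WaveNet model you load (e.g. from r9y9/wavenet_vocoder); it is not that repo's actual API. It also assumes the mel spectrogram has already been de-normalized back to linear power before inversion.

```python
# Sketch only: swap Griffin-Lim inversion for a neural vocoder at synthesis time.
import librosa

def mel_to_waveform(mel, vocoder=None, sr=22050, n_fft=1024, hop_length=256):
    if vocoder is not None:
        # WaveNet path: autoregressive, sample-by-sample generation conditioned
        # on the mel frames; slow at inference time but much higher fidelity.
        # `generate` is a hypothetical method name, not a real library call.
        return vocoder.generate(mel)
    # Baseline: approximate inversion of the mel spectrogram with Griffin-Lim.
    return librosa.feature.inverse.mel_to_audio(
        mel, sr=sr, n_fft=n_fft, hop_length=hop_length, n_iter=60
    )
```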
I wonder why the Griffin-Lim samples at https://nv-adlr.github.io/WaveGlow sound as good as the other methods. Is it the large number of iterations, or is it just good output from Tacotron?
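For reference, here is a minimal Griffin-Lim loop (a sketch, assuming typical Tacotron STFT settings) that shows what the iteration count controls: each pass re-estimates phase from the fixed magnitude spectrogram, so more iterations generally mean fewer metallic phase artifacts, at the cost of slower synthesis.

```python
import numpy as np
import librosa

def griffin_lim(magnitude, n_iter=60, hop_length=256, win_length=1024):
    """Reconstruct a waveform from a linear magnitude spectrogram of shape
    (1 + n_fft // 2, frames) by iteratively refining a random phase estimate."""
    n_fft = (magnitude.shape[0] - 1) * 2
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    for _ in range(n_iter):
        # Invert the current complex spectrogram, then recompute its phase.
        inverse = librosa.istft(magnitude * angles,
                                hop_length=hop_length, win_length=win_length)
        rebuilt = librosa.stft(inverse, n_fft=n_fft,
                               hop_length=hop_length, win_length=win_length)
        angles = np.exp(1j * np.angle(rebuilt))
    return librosa.istft(magnitude * angles,
                         hop_length=hop_length, win_length=win_length)
```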
Does anyone plan to use a WaveNet-based vocoder instead of the Griffin-Lim algorithm? It greatly increases audio quality in the single-speaker experiments of Deep Voice 2 and Deep Voice 3. In Deep Voice 3, both `deep voice 3 + wavenet` and `tacotron + wavenet` achieve the highest MOS of 3.78. Furthermore, it is claimed that `deep voice 3 + wavenet` is faster to train and faster to converge than `tacotron + wavenet`.