anjanakethineni opened this issue 4 years ago
Hi there, the steps to run training are in the main README file. I'm not sure what the problem is here.
@fatchord Hi there, could you kindly share the hparams.py you used, so we can reproduce your pretrained models ljspeech.tacotron.r2.180k.zip and ljspeech.wavernn.mol.800k.zip? Also, do you have any suggestions on using a bigger batch size to speed up training? Thanks.
Tacotron: the default hparams.py configuration produces very blurry attention plots even after 350k steps.
[attention plot: default config @ 350k steps]
[attention plot: pretrained model @ 180k steps]
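On the batch-size question: the thread doesn't say what settings the pretrained checkpoints used, but a common heuristic when raising the batch size is to scale the learning rate proportionally. The sketch below uses illustrative base values (batch size 32, learning rate 1e-4) that are assumptions on my part, not the author's actual hparams:

```python
# Illustrative sketch only: BASE_BATCH_SIZE and BASE_LR are assumed
# values, not the settings used for the pretrained checkpoints.
BASE_BATCH_SIZE = 32
BASE_LR = 1e-4

def scaled_lr(new_batch_size: int,
              base_batch_size: int = BASE_BATCH_SIZE,
              base_lr: float = BASE_LR) -> float:
    """Linear LR-scaling heuristic: grow the learning rate in
    proportion to the batch size. Worth re-tuning empirically,
    since attention alignment in Tacotron is sensitive to LR."""
    return base_lr * new_batch_size / base_batch_size

# Doubling the batch size doubles the learning rate under this rule.
print(scaled_lr(64))  # -> 0.0002
```

If attention still fails to align with a larger batch, it is usually safer to back the learning rate off toward the default than to keep the linear scaling.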
I am trying to produce pretrained models for other datasets. Can somebody tell me the exact steps to train from scratch?