fatchord / WaveRNN

WaveRNN Vocoder + TTS
https://fatchord.github.io/model_outputs/
MIT License

Pre-trained models. #209

Open · anjanakethineni opened this issue 4 years ago

anjanakethineni commented 4 years ago

I am trying to produce pre-trained models for other datasets. Can somebody tell me the exact steps to run the pre-training?

fatchord commented 4 years ago

Hi there, the steps to run training are in the main readme file. I'm not sure what the problem is here?

whyxzh commented 3 years ago

@fatchord Hi there, could you kindly share the hparams.py you used to produce the pretrained models ljspeech.tacotron.r2.180k.zip and ljspeech.wavernn.mol.800k.zip? Also, do you have any suggestions on using a bigger batch size to speed up training? Thanks.

Tacotron: the default hparams.py configuration results in very blurry attention plots even after 350k steps.

[Image: default config @ 350k steps (1_griffinlim_351k)]

[Image: compared to the pretrained model @ 180k steps (1_griffinlim_180k)]
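
For reference, here is a minimal, illustrative sketch of the kind of hparams.py settings being asked about. The variable names (`tts_schedule`, `voc_batch_size`, `voc_total_steps`) and all values are assumptions based on the repo's default config format, not the settings actually used for the released checkpoints:

```python
# hparams_sketch.py -- illustrative only, NOT the configuration behind the
# released ljspeech.tacotron.r2.180k / ljspeech.wavernn.mol.800k checkpoints.
# Variable names and tuple layout are assumptions modelled on the repo's
# default hparams.py and may not match the actual file.

# Tacotron training schedule: (reduction factor r, learning rate, step, batch size).
# The checkpoint name "r2.180k" suggests the model was at r = 2 around 180k steps.
tts_schedule = [
    (7, 1e-3,  10_000, 32),   # early phase: large r helps attention form quickly
    (5, 1e-4, 100_000, 32),
    (2, 1e-4, 180_000, 16),   # r = 2 region
    (1, 1e-4, 350_000,  8),   # late phase: small r, finer-grained frames
]

# WaveRNN (vocoder) side. A larger batch size can reduce wall-clock training
# time if GPU memory allows, but the learning rate usually needs retuning.
voc_batch_size = 32           # assumed default; increase cautiously
voc_total_steps = 1_000_000   # the MoL checkpoint name indicates ~800k steps
```

Note this is only meant to show the shape of the config; the actual values behind the pretrained models would have to come from the author.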