jaywalnut310 / glow-tts

A Generative Flow for Text-to-Speech via Monotonic Alignment Search

Models for finetuning #61

Open sygi opened 3 years ago

sygi commented 3 years ago

You publish a single checkpoint file (pretrained/pretrainedblank.pth), but from the code it seems that two checkpoints are required for training: G_{epoch}.pth and ddi_G.pth.

Could you clarify whether the models you published can be used for fine-tuning, or whether only inference and training from scratch are supported?
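
For context, this is roughly the fine-tuning setup I have in mind (a minimal sketch, following the inference notebook for `utils.get_hparams_from_file`, `models.FlowGenerator`, and `utils.load_checkpoint`; the config path and checkpoint filename are placeholders):

```python
import models
import utils
from text.symbols import symbols

# Load the training config the same way the inference notebook does
# (the config path is just an example).
hps = utils.get_hparams_from_file("./configs/base.json")

# Rebuild the generator with the settings it was trained with.
# The "+ 1" accounts for the extra blank symbol if add_blank is set.
n_symbols = len(symbols) + 1 if getattr(hps.data, "add_blank", False) else len(symbols)
generator = models.FlowGenerator(
    n_symbols,
    out_channels=hps.data.n_mel_channels,
    **hps.model).cuda()

# Initialize from the published checkpoint instead of ddi_G.pth or a
# previous G_{epoch}.pth, then hand the model to the regular training
# loop in train.py -- this last step is the part I am unsure about.
utils.load_checkpoint("pretrained.pth", generator)
```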