jaywalnut310 / glow-tts

A Generative Flow for Text-to-Speech via Monotonic Alignment Search

Models for finetuning #61

Open sygi opened 2 years ago

sygi commented 2 years ago

You publish a single checkpoint file (pretrained/pretrained_blank.pth), but from the code it seems that training requires two checkpoints: G_{epoch}.pth and ddi_G.pth.

Could you clarify whether the published models can be used for fine-tuning, or whether only inference and training from scratch are supported?
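
For context, here is roughly what I was planning to try: load the published checkpoint and reuse its weights as the starting point for training. This is only a sketch based on my assumptions about the checkpoint layout (a dict with a "model" entry, which is what the training loop appears to save), not something confirmed by the repo; the file name and the FlowGenerator construction are placeholders:

```python
import torch

# Sketch of the fine-tuning attempt (my assumptions, not confirmed by the repo):
# load the released weights and inspect them before handing them to the generator.
ckpt = torch.load("pretrained.pth", map_location="cpu")

# I assume the file stores either a dict with a "model" entry (plus optimizer
# state and iteration count) or a bare state_dict; handle both cases.
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
print(sorted(state_dict.keys())[:10])  # sanity-check parameter names

# generator = models.FlowGenerator(...)  # built from the base config (hypothetical)
# generator.load_state_dict(state_dict)  # then continue training with a fresh optimizer
```

If this is roughly the intended path, my remaining question is what to do about ddi_G.pth, since I don't see a published counterpart for it.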