I'm fine-tuning the released checkpoint (~190M parameters) on a new language for the plain_tts task. The initial results are promising: the voice sounds natural, but the quality is not yet satisfying. I would greatly appreciate suggestions for fine-tuning hyperparameters, as well as guidance on aspects like dataset size for fine-tuning. Any other insights you gathered during the fine-tuning process would also be invaluable.
To improve the results, I trained the model for more epochs (over 15). However, the generated audio then shows anomalies such as repeated words and missing phonemes, even though the input script is correct. I suspect this is due to overfitting, so I raised the dropout to 0.7, but the issue persists. I have also noticed a recurring warning message during inference at later epochs (beyond 15), and its frequency increases with the number of epochs.
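Since the anomalies only appear after roughly 15 epochs, one thing I'm considering is selecting the checkpoint by validation loss rather than training to a fixed epoch count. A minimal sketch of that selection logic (the function name and the loss values here are hypothetical, not from my actual training run):

```python
# Hypothetical sketch: pick the best checkpoint by validation loss instead of
# training for a fixed number of epochs. The loss values below are made up
# to illustrate a run that starts overfitting partway through.

def select_best_epoch(val_losses, patience=3):
    """Return (best_epoch, stop_epoch): the epoch with the lowest validation
    loss, and the epoch at which early stopping would trigger."""
    best_epoch = 0
    best_loss = float("inf")
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch
        elif epoch - best_epoch >= patience:
            # no improvement for `patience` epochs: stop here
            return best_epoch, epoch
    return best_epoch, len(val_losses) - 1

# Example: loss improves until epoch 4, then degrades (overfitting).
losses = [2.1, 1.8, 1.6, 1.5, 1.4, 1.45, 1.5, 1.6, 1.7]
best, stop = select_best_epoch(losses, patience=3)
print(best, stop)  # -> 4 7: keep the epoch-4 checkpoint, stop at epoch 7
```

This would let me keep the checkpoint from before the repetitions/missing phonemes start, instead of relying on a high dropout to suppress them.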