coqui-ai / TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
http://coqui.ai
Mozilla Public License 2.0

Loss stagnates at 1.0 during training on custom dataset #768

Closed · rioharper closed this issue 3 years ago

rioharper commented 3 years ago

I have been fine-tuning a Glow TTS model on Google Colab. The training loss quickly dropped to 1.0 and has actually increased over the past 1,400 steps or so. I made sure to remove any noise or silence from my dataset of 300 samples and used the analyze-spectrograms notebook to check the configuration, so I am lost on what I need to do to make the loss drop further. Do I just need to keep training?
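
For reference, a minimal sketch of the kind of per-clip silence trimming and duration check described above (the folder names, sample rate, and `top_db` threshold are placeholders, not the exact values used):

```python
import glob
import os

import librosa
import soundfile as sf

IN_DIR = "wavs_raw"      # placeholder: folder with the raw recordings
OUT_DIR = "wavs"         # placeholder: folder for the cleaned clips
SAMPLE_RATE = 22050      # should match the sample_rate in the training config
os.makedirs(OUT_DIR, exist_ok=True)

for path in glob.glob(os.path.join(IN_DIR, "*.wav")):
    # load and resample to the target rate
    wav, _ = librosa.load(path, sr=SAMPLE_RATE)
    # trim leading/trailing silence; top_db is an illustrative threshold
    trimmed, _ = librosa.effects.trim(wav, top_db=40)
    duration = len(trimmed) / SAMPLE_RATE
    # flag clips that are unusually short or long for manual review
    if duration < 1.0 or duration > 10.0:
        print(f"check manually: {path} ({duration:.2f}s)")
    sf.write(os.path.join(OUT_DIR, os.path.basename(path)), trimmed, SAMPLE_RATE)
```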

Tensorboard results: [image]

Configuration: [image]

Dataset files: https://drive.google.com/drive/folders/1OOIriahYvNPRnK3NMxzH0L1XF_cIGVuM?usp=sharing

Colab notebook: https://colab.research.google.com/drive/1vYMd3FBpbpnFZeJlSZWCINdERAS_VUSr?usp=sharing
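
For completeness, a rough sketch of how a fine-tuning run like this is typically launched against a pretrained checkpoint (the script name, config path, and checkpoint path below are placeholders; the exact entry point and flags depend on the installed TTS version):

```python
# Launch the trainer from Python; in the Colab notebook this would normally
# be a shell cell instead. Paths and the script name are assumptions.
import subprocess

subprocess.run(
    [
        "python",
        "TTS/bin/train_tts.py",           # assumed training entry point
        "--config_path", "config.json",   # placeholder config path
        "--restore_path", "pretrained_glow_tts.pth.tar",  # placeholder pretrained checkpoint
    ],
    check=True,
)
```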

rioharper commented 3 years ago

After training for 1000 epochs on top of a Glow TTS pretrained model (300,000 steps), I actually ran into the same issue. What else can I do?

erogol commented 3 years ago

How does the model sound?
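
One quick way to check is to synthesize a few test sentences from the latest checkpoint. A minimal sketch, assuming the `TTS.utils.synthesizer.Synthesizer` helper (checkpoint and config paths are placeholders, and argument names can vary between TTS releases):

```python
from TTS.utils.synthesizer import Synthesizer

# Placeholders for the fine-tuned Glow TTS checkpoint and its config.
synth = Synthesizer(
    tts_checkpoint="checkpoint_330000.pth.tar",
    tts_config_path="config.json",
)

# Without a vocoder checkpoint this falls back to Griffin-Lim, which is
# enough to judge pronunciation and prosody.
wav = synth.tts("This is a quick listening test of the fine-tuned model.")
synth.save_wav(wav, "test_sample.wav")
```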

erogol commented 3 years ago

I'm moving this to Discussions as it does not conform to the issue template.