Kyubyong / tacotron

A TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model
Apache License 2.0
1.83k stars 436 forks

mean_loss1 & mean_loss2 meaning #93

Open eazhary opened 7 years ago

eazhary commented 7 years ago

I am training on Arabic. I changed the character set in prepro.py, cleaned the input text, and created the CSV file in the proper format. I trained on a single sample (sanity check) and it works perfectly.

Now, training on a larger dataset (6,300 sentences), I get a very high mean_loss1 (roughly 100 to 150) and mean_loss2 greater than 1.

As far as I understand, mean_loss1 is the error in decoder1 (generated mel spectrogram vs. ground truth), and mean_loss2 is the error in decoder2 (generated magnitude vs. ground-truth magnitude).
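For reference, both losses in this repo are mean absolute (L1) errors over their respective spectrograms, so their magnitude scales directly with the magnitude of the target values. The sketch below (plain NumPy, with made-up shapes and random data standing in for mel and linear-magnitude targets) illustrates that two decoders with the same per-element error produce the same L1 loss only when their targets live on the same scale:

```python
import numpy as np

# Illustrative sketch only: shapes and values are hypothetical stand-ins
# for Tacotron's two decoder outputs (decoder1: mel, decoder2: magnitude).
rng = np.random.default_rng(0)

# (batch, time, n_mels) and (batch, time, 1 + n_fft // 2)
mel_true = rng.uniform(0, 1, size=(16, 100, 80))
mel_pred = mel_true + rng.normal(0, 0.1, size=mel_true.shape)

mag_true = rng.uniform(0, 1, size=(16, 100, 513))
mag_pred = mag_true + rng.normal(0, 0.1, size=mag_true.shape)

# L1 losses, analogous to mean_loss1 / mean_loss2 in train.py
mean_loss1 = np.mean(np.abs(mel_pred - mel_true))  # decoder1 (mel) loss
mean_loss2 = np.mean(np.abs(mag_pred - mag_true))  # decoder2 (magnitude) loss

print(mean_loss1, mean_loss2)  # comparable, since targets share a scale

# If one target is left unnormalized (say, raw values ~100x larger),
# the same relative error yields a ~100x larger L1 loss:
mean_loss1_unnorm = np.mean(np.abs(100 * mel_pred - 100 * mel_true))
print(mean_loss1_unnorm)
```

This suggests one thing worth checking: whether the Arabic preprocessing pipeline normalizes the spectrograms to the same range the original English preprocessing does, since an unnormalized target alone can explain a loss near 150.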

How come mean_loss2 is so much smaller than mean_loss1 ?

Why do I have such a high loss value even after 24K steps?

I am using the default parameters with a batch size of 16.
