:stuck_out_tongue_closed_eyes: TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German, and is easy to adapt to other languages)
Hi, thank you for the valuable repo.
I trained fastspeech2 on multi-speaker datasets. When I add a new speaker, the predicted mel-spectrograms are very inaccurate. Now I want to compute the loss per sample to determine which sample(s) cause this issue.
I see a method named `compute_per_example_losses` in `train_fastspeech2.py`.
How can I use this function? Could you please explain what the `batch` parameter is? Do you have an example?
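For context, here is roughly what I am trying to do. I don't know the exact signature of `compute_per_example_losses`, so this is only a sketch of the idea: compute one loss value per example in a batch of mel-spectrograms, then rank samples by loss to find outliers. The function `per_example_mel_loss` and the array shapes below are my own assumptions, not the repo's API:

```python
import numpy as np

def per_example_mel_loss(mel_pred, mel_gt):
    """Mean absolute error per example over a batch of mel-spectrograms.

    mel_pred, mel_gt: arrays of shape [batch, frames, n_mels].
    Returns an array of shape [batch] -- one loss value per sample,
    so suspect samples can be ranked and inspected individually.
    (Illustrative sketch only; not the repo's compute_per_example_losses.)
    """
    return np.mean(np.abs(mel_pred - mel_gt), axis=(1, 2))

# Toy batch: 3 examples, 100 frames, 80 mel bins.
rng = np.random.default_rng(0)
gt = rng.normal(size=(3, 100, 80))
pred = gt.copy()
pred[2] += 1.0  # corrupt the third sample to simulate a bad recording

losses = per_example_mel_loss(pred, gt)   # shape (3,)
worst = int(np.argmax(losses))            # index of the most suspect sample
```

Is this the kind of per-example loss that `compute_per_example_losses` returns, and is `batch` simply the dictionary of tensors yielded by the training dataset?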
Best regards.
@dathudeptrai
@ZDisket