Closed · Tomiinek closed this issue 1 year ago
Nice catch, that's concerning. Thanks for reporting it. We'll look into it more, but it looks like you are right.
This `torch.no_grad` doesn't affect the speaker encoder during training, because there are no trainable parameters before it. But in our case, using the original implementation for SCL, the TTS parameters sit before this `torch.no_grad`. I guess the slightly better SECS scores in the paper are explained by the extra training steps...
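A minimal sketch of that distinction (toy modules standing in for the generator and encoder, not the repo's code): any tensor produced inside `torch.no_grad()` is detached from the autograd graph, so parameters *before* the block can never receive gradient through it.

```python
import torch

tts = torch.nn.Linear(4, 4)              # stands in for the TTS generator
spk_enc = torch.nn.Linear(4, 4)          # stands in for the frozen speaker encoder
spk_enc.requires_grad_(False)

wav = tts(torch.randn(2, 4))
print(wav.requires_grad)                 # True: the graph reaches the TTS parameters

with torch.no_grad():                    # mimics the torch_spec computation
    spec = wav * 2.0
print(spec.requires_grad)                # False: the graph is cut right here

loss = wav.pow(2).mean() + spk_enc(spec).pow(2).mean()
loss.backward()                          # runs as usual (the first term has grads)
print(tts.weight.grad.abs().sum())       # nonzero, but only from the first term;
                                         # the SCL-like second term contributes zero
```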
Nice catch. Indeed it is an issue. I will submit a PR to fix it.
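For reference, a fix presumably needs to gate the no-grad region rather than disable gradients unconditionally. A sketch of that idea (hypothetical `allow_grad` flag and toy transform, not the actual PR):

```python
import torch

def compute_spec(wav: torch.Tensor, allow_grad: bool) -> torch.Tensor:
    # Gate gradient tracking so SCL can backpropagate through the encoder's
    # input transform when needed, while keeping the old behaviour otherwise.
    with torch.set_grad_enabled(allow_grad and torch.is_grad_enabled()):
        return wav * 2.0  # stand-in for the torch_spec transform

tts = torch.nn.Linear(4, 4)
wav = tts(torch.randn(2, 4))
print(compute_spec(wav, allow_grad=False).requires_grad)  # False: old behaviour
print(compute_spec(wav, allow_grad=True).requires_grad)   # True: SCL can now flow
```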
Thank you @Edresson
Are you also planning to retrain the models and update the YourTTS paper at least on arxiv? :innocent:
I think it is not worth it, because if we do it we will need to recompute the MOS and Sim-MOS. I'm thinking about updating the preprint, removing the Speaker Consistency Loss from the methodology. Given that the Speaker Consistency Loss had no effect on the results, the SCL experiments are equivalent to just keeping the model training for another 50k steps. In addition, I will try to get this corrected in the ICML-published paper as well. Fortunately, it is a minor issue and the reported results are not affected (only the method description is wrong).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check out our discussion channels.
@Tomiinek Thanks so much for finding the bug and reporting it. I talked with all the authors and the final decision was to add an erratum to the YourTTS GitHub repository and to the last page of the preprint. It is done :).
Describe the bug
Hello guys (CC: @Edresson @WeberJulian), when going through YourTTS code & paper, I noticed that you are calculating the inputs for the speaker encoder with no grads: https://github.com/coqui-ai/TTS/blob/d46fbc240ccf21797d42ac26cb27eb0b9f8d31c4/TTS/encoder/models/resnet.py#L153-L200
I suspect that the speaker encoder is not producing any gradients, and the speaker consistency loss has no effect. It looks like this happens:

- `torch_spec` is computed with no grads
- `loss.backward()` works as usual, but the speaker encoder does not contribute to the gradients flowing to the generator at all

Could you please check on that?
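A self-contained way to check this (a sketch with toy modules standing in for the generator and speaker encoder, not the repo's code) is to compare the generator's gradient from an SCL-like term with and without the `no_grad` wrapper around the encoder input:

```python
import torch

gen = torch.nn.Linear(8, 8)
enc = torch.nn.Linear(8, 8).requires_grad_(False)  # frozen speaker encoder

def scl_term(use_no_grad: bool) -> torch.Tensor:
    wav = gen(torch.ones(1, 8))
    if use_no_grad:
        with torch.no_grad():
            spec = wav * 2.0                       # stand-in for torch_spec
    else:
        spec = wav * 2.0
    return enc(spec).pow(2).mean()

for use_no_grad in (True, False):
    gen.zero_grad()
    # A zero-weighted extra term keeps backward() legal even when the
    # SCL-like term is fully detached from the graph.
    loss = 0.0 * gen.weight.sum() + scl_term(use_no_grad)
    loss.backward()
    print(use_no_grad, gen.weight.grad.abs().sum().item())  # ~0.0 vs. > 0
```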
To Reproduce
Expected behavior
No response
Logs
No response
Environment
Additional context
No response