coqui-ai / TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
http://coqui.ai
Mozilla Public License 2.0
35.08k stars 4.27k forks

[Bug] No grad in YourTTS speaker encoder #2348

Closed Tomiinek closed 1 year ago

Tomiinek commented 1 year ago

Describe the bug

Hello guys (CC: @Edresson @WeberJulian), when going through YourTTS code & paper, I noticed that you are calculating the inputs for the speaker encoder with no grads: https://github.com/coqui-ai/TTS/blob/d46fbc240ccf21797d42ac26cb27eb0b9f8d31c4/TTS/encoder/models/resnet.py#L153-L200

I suspect that the speaker encoder is not producing any gradients, so the speaker consistency loss has no effect; the snippet under "To Reproduce" below illustrates what seems to be happening.

Could you please check on that?

To Reproduce

import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)

with torch.no_grad():
    c = a + b  # c is created without a grad_fn, so the graph back to a and b is cut here

d = c + 1
e = a + d  # e depends on a directly, but on b only through the no_grad block
e.backward()

print(a.grad)  # tensor(1.) -- gradient flows through the direct a -> e path
print(b.grad)  # None -- never set, because the no_grad block detached c from b

d.backward()  # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Expected behavior

No response

Logs

No response

Environment

Just reading the code and asking :)

Additional context

No response

WeberJulian commented 1 year ago

Nice catch, that's concerning. Thanks for reporting it. We'll look more into it, but it looks like you are right. This torch.no_grad doesn't affect the speaker encoder's own training because there are no trainable parameters before it. But in our case, using the original implementation for SCL, the TTS parameters come before this torch.no_grad. I guess the slightly better SECS scores in the paper are explained by the extra training steps...
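
To make that concrete, here is a minimal sketch of the failure mode with stand-in modules (the names and shapes below are illustrative only, not the actual YourTTS code):

import torch
import torch.nn as nn

# Stand-ins: "tts" holds the upstream trainable parameters, the no_grad block mimics
# the parameter-free preprocessing in resnet.py, and "encoder" is the speaker encoder.
tts = nn.Linear(16, 16)
encoder = nn.Linear(16, 4)

wav = tts(torch.randn(2, 16))  # the output depends on the TTS parameters

with torch.no_grad():          # the problematic block
    spec = wav * 2.0           # parameter-free preprocessing; the graph back to tts is cut here

emb = encoder(spec)            # the encoder's own weights still receive gradients...
emb.sum().backward()           # ...so standalone speaker-encoder training is unaffected
print(encoder.weight.grad is None)  # False: encoder parameters get gradients
print(tts.weight.grad is None)      # True: the SCL can never update the TTS parameters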

Edresson commented 1 year ago

Nice catch. Indeed it is an issue. I will submit a PR to fix it.
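
For readers following along, the fix presumably amounts to keeping the graph intact when the encoder is called for the speaker consistency loss. A purely hypothetical sketch (the function name and keep_graph flag are made up, not the actual PR):

import torch

def encoder_features(wav, keep_graph=False):
    # Hypothetical wrapper around the parameter-free preprocessing in resnet.py.
    # keep_graph=False reproduces the current behaviour (fine for standalone encoder
    # training); keep_graph=True would be used for the SCL so that gradients can
    # flow back into the TTS parameters.
    ctx = torch.enable_grad() if keep_graph else torch.no_grad()
    with ctx:
        return wav.abs()  # stand-in for the actual spectrogram computation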

Tomiinek commented 1 year ago

Thank you @Edresson

Are you also planning to retrain the models and update the YourTTS paper at least on arxiv? :innocent:

Edresson commented 1 year ago

> Thank you @Edresson
>
> Are you also planning to retrain the models and update the YourTTS paper at least on arxiv? :innocent:

I think it is not worth it, because if we do that we will need to recompute the MOS and Sim-MOS. I'm thinking about updating the preprint and removing the Speaker Consistency Loss from the methodology. Given that the Speaker Consistency Loss had no effect on the results, the SCL experiments are equivalent to just training the model for 50k more steps. In addition, I will try to get this corrected in the ICML published paper as well. Fortunately, it is a minor issue and the reported results are not affected (only the method description is wrong).

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check out our discussion channels.

Edresson commented 1 year ago

@Tomiinek Thanks so much for finding the bug and reporting it. I talked with all the authors and the final decision was to add an Erratum to the YourTTS GitHub repository and to the last page of the preprint. It is done :).