yl4579 / StarGANv2-VC

StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion
MIT License

Why does the ASR model go to train mode in the training loop? #72

Open amitaie opened 1 year ago

amitaie commented 1 year ago

Hey, I saw that the ASR model under "model" is also switched to train mode at the beginning of the training loop. Why is that? I tried to leave it in eval mode, as it is at initialisation, but I got an error.

yl4579 commented 1 year ago

It is in eval mode the whole time: https://github.com/yl4579/StarGANv2-VC/blob/main/train.py#L85

amitaie commented 1 year ago

But if I understood correctly, the ASR model is part of the model that comes back from the build_model method, and in the training loop it goes back to train mode: https://github.com/yl4579/StarGANv2-VC/blob/main/trainer.py#L156
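For reference, the pattern at that line looks roughly like the following. This is a paraphrased, runnable sketch with toy stand-ins, not the repo's exact code:

```python
import torch.nn as nn

# Toy stand-ins for the collection of submodules returned by build_model:
model = {"generator": nn.Linear(4, 4), "asr_model": nn.Linear(4, 4)}
model["asr_model"].eval()                   # frozen at initialisation, as in train.py#L85

# Paraphrase of the pattern at trainer.py#L156: the loop flips *every* entry,
# including the supposedly frozen ASR model, back to train mode.
_ = [model[key].train() for key in model]
print(model["asr_model"].training)          # True -- the eval setting was undone
```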

yl4579 commented 1 year ago

I think you are right. That's probably a mistake. What was the error you got?

amitaie commented 1 year ago

I'm not using the exact same code; I made a lot of changes to integrate it into my repo and workflow. I'll try to reproduce it on the original code, but I think it will be the same error, which is:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [9, 256, 96]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

On the same subject, shouldn't the F0 model be in eval mode as well?

yl4579 commented 1 year ago

I believe there is no difference between train and eval mode for the ASR model, at least for the part we are using here. The part we use (the CNN part) has no batch norm or dropout. For the F0 model it does make a difference, so does this problem also happen when you set the F0 model to eval mode?
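One way to check a claim like this is to scan the module tree for layers whose behaviour actually depends on train/eval mode. A minimal sketch (the toy net below is a placeholder for the real network):

```python
import torch.nn as nn

def mode_sensitive_layers(module: nn.Module):
    """Return submodules whose forward pass differs between train and eval mode."""
    sensitive = (nn.Dropout, nn.Dropout2d, nn.Dropout3d,
                 nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    return [(name, m) for name, m in module.named_modules() if isinstance(m, sensitive)]

# Example with a toy module; substitute the loaded ASR network here:
net = nn.Sequential(nn.Conv1d(80, 256, 3), nn.GroupNorm(8, 256), nn.ReLU())
print(mode_sensitive_layers(net))  # [] -- conv/group norm/relu don't depend on the mode
```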

amitaie commented 1 year ago

I will run a few checks on the F0 model and the ASR model and report my findings. But the ASR model does use dropout, and also normalization (group norm, not batch norm); the CNN uses ConvBlock, which has both of them - https://github.com/yl4579/StarGANv2-VC/blob/main/Utils/ASR/layers.py#L105

yl4579 commented 1 year ago

I think you are right, though train/eval mode does not affect group norm. It does affect dropout, so you can set dropout to 0 without changing the train/eval mode. The F0 model might be harder to fix: you may have to set those batch norm layers specifically to eval mode if setting the entire model to eval mode doesn't work. Let me know if it works.
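A sketch of that workaround for a generic PyTorch module (the function and variable names are mine, not from the repo): zero the dropout probability and pin the batch norm layers to eval mode, leaving the rest of the model alone.

```python
import torch.nn as nn

def freeze_mode_sensitive(module: nn.Module):
    """Zero out dropout and pin batch norm to eval, leaving other layers untouched."""
    for m in module.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.p = 0.0    # dropout with p=0 is a no-op in either mode
        elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()     # use running statistics instead of batch statistics

# Example usage: freeze_mode_sensitive(f0_model)   # f0_model is a placeholder name
```

One caveat: a later `.train()` call on the parent module flips the batch norm layers back to train mode, so the `eval()` part has to be re-applied after every such call (or the trainer's loop changed to skip these submodules).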

amitaie commented 1 year ago

It took me some time, but I have some results. I managed to fix the bug and switch the ASR model to eval mode; I needed to fix a small in-place line in the ASR code. I trained a few models to examine the differences between running in eval mode or not. There are mainly two effects: first, the change in dropout/batch norm behaviour (eval mode), and second, whether gradients are computed.

  1. Switching the F0 and ASR models to eval mode with no_grad saved about 10% of running time and CUDA memory, which I think is mainly due to no_grad (see the sketch after this list).
  2. Switching the ASR model to eval mode made the loss converge to ~6, while without eval it converged to ~10 (!). Listening to the results after 150 epochs, I didn't hear any differences, but this needs to be explored on cases closer to the edge of what the model can do.
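On point 1, note that only the pass over the real (target) audio can safely go under no_grad; the pass over the generated audio must keep its graph so the loss can backpropagate into the generator. The in-place fix typically amounts to replacing something like `nn.ReLU(inplace=True)` or `x += y` with the out-of-place form. A runnable sketch with toy stand-ins (names and shapes are illustrative, not from the repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins so the sketch runs; substitute the real ASR net and mels:
asr_model = nn.Sequential(nn.Conv1d(80, 256, 3, padding=1), nn.ReLU())
mel_real = torch.randn(4, 80, 96)
mel_fake = torch.randn(4, 80, 96, requires_grad=True)  # stands in for generator output

# Freeze the perceptual network's own parameters once, at setup time:
asr_model.eval()
for p in asr_model.parameters():
    p.requires_grad_(False)

# Features of the real utterance are pure targets, so no graph is needed:
with torch.no_grad():
    asr_real = asr_model(mel_real)

# The generated utterance must stay in the graph: the ASR loss has to push
# gradients through the frozen net back into the generator.
asr_fake = asr_model(mel_fake)
loss_asr = F.smooth_l1_loss(asr_fake, asr_real)
loss_asr.backward()   # gradients reach mel_fake, but not asr_model's weights
```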

Here are some TensorBoard results: [image]

yl4579 commented 1 year ago

Thanks for letting me know. Could you make a pull request with these changes to this repo? Or point out where the problem is, and I can make the fix.

mayank-git-hub commented 1 year ago

I have created a pull request addressing the issues with the ASR model. Putting the JDC network under eval mode is not so trivial and requires setting each individual batch norm layer to eval mode, as mentioned by @yl4579.

mayank-git-hub commented 1 year ago

Setting the dropouts to 0 does not produce audible changes when working with speech signals (apart from the changes in the loss values), but it does yield improvements when working with other modalities.