asteroid-team / asteroid

The PyTorch-based audio source separation toolkit for researchers
https://asteroid-team.github.io/
MIT License

Input and output mean scales updated during training (X-UMX) #685

Closed DavidDiazGuerra closed 5 months ago

DavidDiazGuerra commented 9 months ago

Hello,

I've just realized that the elements of mean_scale in X-UMX are optimized during training, since they're registered as parameters. As far as I understand, that dictionary contains the pre-computed mean and std of the dataset for input normalization. Are they really supposed to be optimized during training? Shouldn't they be registered as buffers rather than as parameters?
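
For reference, here's a minimal sketch (not the X-UMX code) of the distinction I mean: a tensor wrapped in nn.Parameter is returned by model.parameters() and therefore updated by the optimizer, while a registered buffer is saved in the state_dict and moved with the module, but never optimized.

```python
import torch
import torch.nn as nn

class Normalizer(nn.Module):
    def __init__(self, mean, scale):
        super().__init__()
        # As a Parameter: listed in model.parameters(), so the optimizer updates it.
        self.mean = nn.Parameter(torch.as_tensor(mean))
        # As a buffer: checkpointed and moved by .to()/.cuda(), but never optimized.
        self.register_buffer("scale", torch.as_tensor(scale))

    def forward(self, x):
        return (x - self.mean) / self.scale

model = Normalizer(mean=[0.5], scale=[2.0])
print([name for name, _ in model.named_parameters()])  # ['mean']
print([name for name, _ in model.named_buffers()])     # ['scale']
```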

Best, David

mpariente commented 9 months ago

Hello!

I don't know this architecture in depth; the people from Sony do: @r-sawata, WDYT?

r-sawata commented 9 months ago

Sorry to have kept you waiting. I was busy with the CVPR deadline, which finally passed yesterday. I'll check this in a few days.

DavidDiazGuerra commented 9 months ago

The easiest solution would be to use requires_grad=False, as is done with the STFT window, but this can cause problems if people do transfer learning or fine-tuning and start freezing/unfreezing parts of the model without being aware of it (I recently ran into this problem myself, and it's how I found this bug).
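
Here's a hedged sketch of that pitfall (illustrative names, not asteroid code): a Parameter created with requires_grad=False is still listed in model.parameters(), so a blanket unfreeze silently makes it trainable again.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # "Frozen" statistic, but still a Parameter.
        self.scale = nn.Parameter(torch.ones(1), requires_grad=False)

model = Model()
print(model.scale.requires_grad)  # False

# A typical fine-tuning "unfreeze everything" loop hits every Parameter:
for p in model.parameters():
    p.requires_grad_(True)

print(model.scale.requires_grad)  # True: the frozen statistic is now trained
```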

I would suggest registering both the scalers and the STFT window as buffers instead of parameters, since that's the general recommendation in PyTorch for model tensors that are not being optimized. Doing this with the STFT window is quite straightforward (I can open a PR with a fix I've been testing that works well and is backward compatible with pre-trained models), but I'm unsure how to do it with the scales, since PyTorch doesn't have an nn.BufferDict equivalent of nn.ParameterDict.
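
One way the missing nn.BufferDict could be emulated, as a hedged sketch (BufferDict and the keys here are illustrative names, not asteroid or PyTorch API): a small module that registers each entry of a dict as a named buffer and exposes dict-style access.

```python
import torch
import torch.nn as nn

class BufferDict(nn.Module):
    def __init__(self, tensors):
        super().__init__()
        # Each entry becomes a named buffer: checkpointed, never optimized.
        for key, value in tensors.items():
            self.register_buffer(key, torch.as_tensor(value))

    def __getitem__(self, key):
        return getattr(self, key)

scales = BufferDict({"vocals": [0.1], "drums": [0.2]})
print(scales["vocals"])              # tensor([0.1000])
print(dict(scales.named_buffers()))  # in the state_dict, no gradient updates
```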

r-sawata commented 5 months ago

Sorry for being so late. I checked this carefully and found that mean and scale with requires_grad=True is correct: optimizing them during training is intentional.

As you may know, X-UMX is an extended version of the original Open-Unmix (UMX). As the initialization in their implementation shows, input_mean and input_scale are meant to be learned during training. Please see here: https://github.com/sigsep/open-unmix-pytorch/blob/4318fb278e1863f4cf8556b513987faf14a15832/openunmix/model.py#L84-L95
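
A hedged sketch of the pattern in the linked initialization (the module name, shapes, and numpy inputs are assumptions, not the exact UMX code): the pre-computed dataset statistics are wrapped in nn.Parameter, so they only serve as a starting point that the optimizer keeps refining during training.

```python
import torch
import torch.nn as nn

class InputScaler(nn.Module):  # hypothetical wrapper, not the UMX class
    def __init__(self, nb_bins, input_mean=None, input_scale=None):
        super().__init__()
        mean = (torch.from_numpy(-input_mean[:nb_bins]).float()
                if input_mean is not None else torch.zeros(nb_bins))
        scale = (torch.from_numpy(1.0 / input_scale[:nb_bins]).float()
                 if input_scale is not None else torch.ones(nb_bins))
        # nn.Parameter with requires_grad=True (the default) => learned.
        self.input_mean = nn.Parameter(mean)
        self.input_scale = nn.Parameter(scale)

    def forward(self, spec):
        return (spec + self.input_mean) * self.input_scale
```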

DavidDiazGuerra commented 5 months ago

Oh, okay. That seems a bit odd to me, but if it works and the original UMX does it that way, I guess it's best to keep it as is.

Thanks, David