I am trying to finetune an instance of EncodecWrapper with my own model. I am noticing that even though requires_grad is True for all layers, .grad is None for the encoder layers only (the decoder layers are fine).
Any idea what causes this and how I can update my encoder params?
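For reference, here is roughly how I am checking this after a backward pass (`model` and `loss` are placeholder names for my own setup):

```python
# Minimal sketch of the check described above. `model` is the
# EncodecWrapper instance being finetuned and `loss` is the training
# loss for one batch; both are placeholders for my actual setup.
loss.backward()

for name, param in model.named_parameters():
    if param.requires_grad and param.grad is None:
        # every name printed here starts with "encoder."; all of the
        # decoder parameters have a populated .grad
        print(name)
```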