lucidrains / audiolm-pytorch

Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in PyTorch

Gradient Issue when Finetuning #255

Closed: tysonjordan closed this issue 8 months ago

tysonjordan commented 10 months ago

I am trying to finetune an instance of EncodecWrapper with my own model. I am noticing that although `.requires_grad` is `True` for all parameters, after the backward pass `.grad` is `None` for the encoder parameters only (the decoder parameters receive gradients as expected).
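A quick way to confirm which parameters actually receive gradients is to run one backward pass and inspect `.grad` on every named parameter. A minimal, self-contained sketch; `encoder` and `decoder` here are toy stand-ins for the wrapper's internals, not the real EncodecWrapper modules:

```python
import torch
from torch import nn

# Hypothetical toy modules standing in for the encoder/decoder under test;
# swap in the real model when diagnosing.
encoder = nn.Linear(16, 8)
decoder = nn.Linear(8, 16)

x = torch.randn(4, 16)
loss = decoder(encoder(x)).pow(2).mean()
loss.backward()

# After one backward pass, list which parameters actually received gradients.
for prefix, module in (('encoder', encoder), ('decoder', decoder)):
    for name, param in module.named_parameters():
        print(f'{prefix}.{name}: requires_grad={param.requires_grad}, '
              f'grad is None={param.grad is None}')
```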

Any idea what causes this and how I can update my encoder parameters?
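For context, one common cause of exactly this symptom (an assumption about what may be happening, not something confirmed from the wrapper's source): if the encoder output is detached, or the encode step runs under `torch.no_grad()` or `torch.inference_mode()`, autograd never records the encoder ops, so the encoder's `.grad` fields stay `None` even though `requires_grad` is `True`. A minimal sketch of that failure mode:

```python
import torch
from torch import nn

# Hypothetical stand-ins for the wrapper's encoder/decoder.
encoder = nn.Linear(16, 8)
decoder = nn.Linear(8, 16)

x = torch.randn(4, 16)

# Detaching here (or encoding under torch.no_grad()) cuts the autograd graph,
# so backward() never reaches the encoder's parameters.
latent = encoder(x).detach()
loss = decoder(latent).pow(2).mean()
loss.backward()

print(encoder.weight.grad is None)  # True  -> encoder received no gradient
print(decoder.weight.grad is None)  # False -> decoder trains normally
```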