Closed hasanfarooq7 closed 1 year ago
The decoders are updated in the training process. https://github.com/median-research-group/LibMTL/blob/0aaada50cd609b39c65553d4c2760c18b02d8e74/LibMTL/weighting/abstract_weighting.py#L43
Thanks for pointing this out. Yes, I understand it is there, but can you check and confirm that the decoder weights are actually being updated during training iterations for your datasets?
Sure.
Thanks a lot for the confirmation. It must be an issue in my implementation. I am closing the issue.
Hi, thank you for putting up such a fantastic MTL experimentation library. I used it on my own datasets and everything looked good, except that when I observed the encoder/decoder weights during training, only the encoder weights were updated after each iteration or epoch; the decoder weights remained the same. When I freeze only the encoder layers (`self.model.encoder.requires_grad_(False)`) and not the decoder layers, the training loss stays the same across all iterations/epochs, which suggests the decoder weights are not updating during training. I tried the HPS architecture with EW weighting. Could you help me debug what might be causing this issue?
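One way to check the claim above directly is to snapshot the decoder parameters before a training step and compare them afterwards. The sketch below is a minimal, self-contained demo with a toy encoder/decoder pair; the helper names (`snapshot`, `changed_params`) and the toy modules are my own and are not part of LibMTL. For the real case, you would snapshot the trainer's actual decoder modules instead.

```python
import torch

def snapshot(module):
    """Detached copy of a module's parameters, keyed by name."""
    return {n: p.detach().clone() for n, p in module.named_parameters()}

def changed_params(before, module):
    """Names of parameters whose values differ from the snapshot."""
    return [n for n, p in module.named_parameters()
            if not torch.equal(before[n], p.detach())]

# Toy stand-ins for the shared encoder and a task decoder.
encoder = torch.nn.Linear(4, 8)
decoder = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(
    list(encoder.parameters()) + list(decoder.parameters()), lr=0.1
)

before = snapshot(decoder)

# One training step on random data.
x = torch.randn(16, 4)
loss = decoder(encoder(x)).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()

# If the decoder is training correctly, both 'weight' and 'bias' change.
print(changed_params(before, decoder))
```

If `changed_params` comes back empty for your decoders after a step, the decoder parameters are likely missing from the optimizer's parameter groups, or their gradients are being cut off somewhere before `optimizer.step()`.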