Thank you very much for your work.
I have some questions:
When unc_noise is False, i.e. the model is deepLab(13, False) in your code, the model is pretrained and includes the encoder and the main decoder, right?
Are the parameters of the auxiliary decoder copied from the main decoder or initialized randomly?
During training, are the parameters of the encoder and the auxiliary decoder updated while only the parameters of the main decoder are kept unchanged?
During training, the pseudo-labels are generated by the pre-trained model and are not updated later in training, right?
I'd be grateful if you could answer these!
Params of auxiliary decoders copied?: Not in the final version. I remember trying this out, but I don't think it mattered much.
Only params of aux decoders are trained: Yes. Because the aux decoders are randomly initialized, they won't be able to predict labels correctly with the existing feature representation, so they are the parts that need training.
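Roughly, the pattern looks like this. This is a simplified sketch with placeholder module names (encoder, main_decoder, aux_decoders), not the actual repo code: freeze the pretrained parts and hand the optimizer only the parameters that should be updated.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_classes=13, num_aux=2):
        super().__init__()
        # Placeholders standing in for the real encoder/decoders.
        self.encoder = nn.Conv2d(3, 16, 3, padding=1)        # pretrained encoder
        self.main_decoder = nn.Conv2d(16, num_classes, 1)    # pretrained main decoder
        self.aux_decoders = nn.ModuleList(
            nn.Conv2d(16, num_classes, 1) for _ in range(num_aux)  # randomly initialized
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.main_decoder(feats), [d(feats) for d in self.aux_decoders]

model = Net()

# Freeze the pretrained encoder and main decoder ...
for module in (model.encoder, model.main_decoder):
    for p in module.parameters():
        p.requires_grad = False

# ... and give the optimizer only the (randomly initialized) aux-decoder params.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```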
Updating pseudo-labels: They are updated after each epoch. I based my implementation of this on CRST, and you might find it useful to follow that too.
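In pseudocode, the idea is to regenerate the pseudo-labels from the current model at the end of every epoch. The sketch below is only illustrative (the function name, loader, and fixed confidence threshold are placeholders; CRST itself uses class-balanced thresholds rather than a single fixed one):

```python
import torch

def generate_pseudo_labels(model, target_loader, device, threshold=0.9):
    """Re-run the current model over the target images and keep only
    confident predictions (255 = ignore index)."""
    model.eval()
    all_labels = []
    with torch.no_grad():
        for images in target_loader:
            images = images.to(device)
            logits = model(images)              # assumes per-pixel class logits
            probs = torch.softmax(logits, dim=1)
            conf, labels = probs.max(dim=1)
            labels[conf < threshold] = 255      # drop low-confidence pixels
            all_labels.append(labels.cpu())
    return all_labels

# Typical loop: train for one epoch, then refresh the pseudo-labels.
# for epoch in range(num_epochs):
#     train_one_epoch(model, train_loader, pseudo_labels)
#     pseudo_labels = generate_pseudo_labels(model, target_loader, device)
```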