Hi,
I want to confirm with the authors that the model performance reported in the paper is always obtained by updating all of the backbone parameters, right? Otherwise, line 198 in unsupervise_adapt.py looks like a small bug to me.
We update all trainable parameters, as stated in the implementation details in Sec. 4.1 of the paper.
Unlike [45, 66], all trainable layers are updated and no special parameter selection is required in our method.
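For anyone skimming this thread, here is a minimal sketch of the distinction being discussed. This is plain PyTorch, not the repository's actual code; the model, optimizer choice, and learning rate are placeholders, and the "subset" variant is only meant to illustrate the kind of selective update some test-time adaptation methods use.

```python
# Minimal sketch (not the repository's code): "update everything trainable"
# vs. "update only a selected subset of parameters".
import torch
import torch.nn as nn

# Placeholder backbone; the real model in the repo differs.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# What the authors describe: hand every trainable parameter to the
# optimizer, with no layer-wise selection.
all_params = [p for p in model.parameters() if p.requires_grad]
optimizer_all = torch.optim.SGD(all_params, lr=1e-3)  # lr is a placeholder

# For contrast: a selective scheme that adapts only normalization-layer
# affine parameters, as some other adaptation methods do.
norm_params = [
    p
    for m in model.modules()
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm))
    for p in m.parameters()
    if p.requires_grad
]
optimizer_subset = torch.optim.SGD(norm_params, lr=1e-3)
```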