yxgeee / MMT

[ICLR-2020] Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification.
https://yxgeee.github.io/projects/mmt
MIT License

Pre-training on the source domain #26

Closed · yjh576 closed 4 years ago

yjh576 commented 4 years ago

Hi, I have a question. In the pre-training stage on the source domain, the PreTrainer in the trainer code contains these lines:

    s_features, s_cls_out = self.model(s_inputs)

    # target samples: only forward
    t_features, _ = self.model(t_inputs)

    # backward main #
    loss_ce, loss_tr, prec1 = self._forward(s_features, s_cls_out, targets)
    loss = loss_ce + loss_tr

I understand that the first line is necessary for the overall optimization, but the forward pass on the target samples does not seem necessary, since t_features is never used in the loss. However, keeping that line boosts performance on the target dataset. As the comment points out, it is "only forward". I don't understand why a forward-only pass helps. Could you explain? Thank you.
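For reference, below is a minimal sketch of how such a pre-training step fits together, making it explicit that t_features never enters the loss. This is a simplified reconstruction based only on the snippet above, not the repository's exact code: the model is assumed to return a (features, class_logits) pair, the triplet term computed by self._forward is omitted, and the comment about BatchNorm running statistics is an assumed interpretation of why the forward-only pass on target data matters (see the maintainer's reply below for the authoritative explanation).

    import torch
    import torch.nn as nn

    class PreTrainerSketch:
        """Simplified sketch of one source-domain pre-training step
        (reconstruction from the snippet above, not MMT's actual code)."""

        def __init__(self, model, lr=3.5e-4):
            self.model = model                          # assumed to return (features, class_logits)
            self.criterion_ce = nn.CrossEntropyLoss()   # ID loss on source labels
            self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

        def train_step(self, s_inputs, targets, t_inputs):
            self.model.train()

            # Source samples: forward, used by the loss and the backward pass.
            s_features, s_cls_out = self.model(s_inputs)

            # Target samples: forward only. t_features is never used in the
            # loss, so no gradient comes from this pass. The assumed effect is
            # that running target batches through the network in train mode
            # updates the BatchNorm running mean/variance with target-domain
            # statistics, which helps evaluation on the target dataset.
            t_features, _ = self.model(t_inputs)

            # Loss is computed from source outputs only (the original snippet
            # also adds a triplet term via self._forward; omitted here).
            loss = self.criterion_ce(s_cls_out, targets)

            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            return loss.item()

Under this interpretation, deleting the t_features line would leave the loss value unchanged but keep the BatchNorm buffers fitted only to source-domain statistics.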

yxgeee commented 4 years ago

Please refer to https://github.com/yxgeee/MMT/issues/17

yjh576 commented 4 years ago

I see. Thanks.