Hi, thanks for the nice code~
I have some questions about the pseudo-label training part of the code.
This is the pseudo-labeling loss on the source data, corresponding to Eq. 11 in the paper:
loss_seg1 = self.update_variance(labels, pred1, pred2), where the labels come from the source domain.
The target domain does not use pseudo-labels; instead, it uses entropy minimization via a KL term against the mean prediction:
loss_kl = (self.kl_loss(self.log_sm(pred_target2), mean_pred) + self.kl_loss(self.log_sm(pred_target1), mean_pred)) / (nhw)
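To make my reading of that target-domain term concrete, here is a minimal NumPy sketch of what I believe it computes. This assumes kl_loss is nn.KLDivLoss(reduction='sum'), log_sm is a log-softmax over the class dimension, mean_pred is the averaged softmax of the two heads, and nhw is n*h*w; please correct me if any of these assumptions are wrong.

```python
import numpy as np

def softmax(x, axis=1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_to_mean_loss(pred1, pred2):
    """KL of each head's prediction against the mean prediction,
    summed over all elements and averaged over the n*h*w pixels.

    pred1, pred2: logits of shape (n, c, h, w).
    Mirrors kl_loss(log_sm(pred_i), mean_pred) for
    nn.KLDivLoss(reduction='sum'), i.e.
    sum( mean_pred * (log(mean_pred) - log_softmax(pred_i)) ).
    """
    n, c, h, w = pred1.shape
    p1 = softmax(pred1, axis=1)
    p2 = softmax(pred2, axis=1)
    mean_pred = 0.5 * (p1 + p2)
    eps = 1e-8  # guard against log(0)
    kl1 = np.sum(mean_pred * (np.log(mean_pred + eps) - np.log(p1 + eps)))
    kl2 = np.sum(mean_pred * (np.log(mean_pred + eps) - np.log(p2 + eps)))
    return (kl1 + kl2) / (n * h * w)
```

If this is right, the term is zero when the two heads agree and grows with their disagreement, which reads like a consistency/entropy regularizer rather than a pseudo-label loss.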