I'd just like to confirm whether the student model actually uses the labelled data during training, along with the pseudo-labelled data. In the paper, the diagram shows only strongly augmented data being passed to the student model, but in the code the student model also has a forward pass over the labelled data with both weak and strong augmentations. For reference:
This shows that the weakly augmented unlabelled data goes to the teacher branch.
This shows that Lsup comes from burn-in, and Lunsup comes from the supervision by pseudo-labels.
This shows that the student model has a forward pass on the labelled data (where that forward pass is not part of burn-in).
Thanks for asking. Yes, in the code we input both the weakly and strongly augmented labelled images into the student model.
We also tried using only the strongly augmented images, and we got similar results.
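The data flow described above can be sketched as a toy training step. This is only an illustration of the idea under discussion, not the repo's actual API: the names (`Model`, `weak_aug`, `strong_aug`, `training_step`, `unsup_weight`) are made up, and the detector is replaced by a trivial stub so the sketch is self-contained.

```python
# Hedged sketch of the teacher-student step discussed in this thread.
# All names here are hypothetical stand-ins, not the repo's real classes.

class Model:
    """Stand-in for a detector: loss() returns a scalar per-batch loss."""
    def predict(self, images):
        # Toy "pseudo-labels": threshold each value (real code would run
        # the teacher detector and filter boxes by confidence).
        return [x > 0.5 for x in images]

    def loss(self, images, targets=None):
        # Toy loss: mean of the inputs (targets are ignored in this stub).
        return sum(images) / len(images)


def weak_aug(images):
    return images  # identity stands in for flips/random crops


def strong_aug(images):
    # Stand-in for color jitter / cutout, clipped to [0, 1].
    return [min(1.0, x + 0.1) for x in images]


def training_step(student, teacher, labeled, unlabeled, unsup_weight=4.0):
    # 1) Teacher consumes WEAKLY augmented unlabelled data -> pseudo-labels.
    pseudo = teacher.predict(weak_aug(unlabeled))

    # 2) Student does a forward pass on the labelled data with BOTH weak
    #    and strong augmentations (matching the code, not the paper's
    #    diagram, which only draws the strongly augmented branch).
    l_sup = student.loss(weak_aug(labeled)) + student.loss(strong_aug(labeled))

    # 3) Student on strongly augmented unlabelled data, supervised by the
    #    teacher's pseudo-labels.
    l_unsup = student.loss(strong_aug(unlabeled), targets=pseudo)

    return l_sup + unsup_weight * l_unsup
```

The point of the sketch is step 2: the supervised loss is computed on both augmented views of the labelled batch, so Lsup and Lunsup are combined in every post-burn-in iteration.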
Hello @ycliu93, thanks for your work.
Thanks very much.