Closed: quito418 closed this issue 1 year ago
We tried both update rules and found that the performance was similar when using pretrained models. I think this is because the momentum coefficient is 0.999, so the impact of any single batch on `qhat` is small. However, I don't know whether the two rules behave similarly under the standard SSL setting (without pretrained models).
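A minimal sketch of the EMA update (using torch; `num_classes` and the random batch are stand-ins, not values from the repo) showing why the per-batch change in `qhat` is tiny at momentum 0.999:

```python
import torch

num_classes = 10     # hypothetical; any class count works
momentum = 0.999     # the EMA coefficient discussed above

# qhat starts uniform and tracks the mean prediction on unlabeled data
qhat = torch.full((num_classes,), 1.0 / num_classes)

# mean softmax prediction over one unlabeled batch (random stand-in here)
mean_prob = torch.softmax(torch.randn(64, num_classes), dim=-1).mean(dim=0)

qhat_updated = momentum * qhat + (1 - momentum) * mean_prob

# The two update orders differ only by this one-step drift; each entry
# moves by at most (1 - momentum) = 1e-3, so using qhat before or after
# the update changes the losses very little.
print((qhat_updated - qhat).abs().max().item())
```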
Thank you for your reply!
Hi,
I am looking into the DebiasMatch code and found one thing confusing.
In the DebiasMatch code from this repo, it seems that the same `qhat` is used both for causal inference on the weakly augmented unlabeled data and for the adaptive marginal loss on the strongly augmented unlabeled data:
https://github.com/thuml/Transfer-Learning-Library/blob/efe6e33caf5a6ea67f6e72135a2ac0ebf847742d/examples/semi_supervised_learning/image_classification/debiasmatch.py#L187

In the authors' code, it seems that `qhat` before the update is used for causal inference, while the updated `qhat` is used for the adaptive marginal loss. Would this make a difference in the results?
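For concreteness, here is a rough sketch of the two orderings I mean (the function name, `tau`, `threshold`, and the `use_updated_for_debias` flag are illustrative, not taken from either codebase):

```python
import torch
import torch.nn.functional as F

def debias_losses(logits_w, logits_s, qhat, tau=0.4, threshold=0.95,
                  momentum=0.999, use_updated_for_debias=True):
    """use_updated_for_debias=True mirrors what this repo seems to do
    (one qhat for both terms); False mirrors my reading of the authors'
    code (pre-update qhat debiases the pseudo-labels, the updated qhat
    enters the adaptive marginal loss)."""
    mean_prob = torch.softmax(logits_w.detach(), dim=-1).mean(dim=0)
    qhat_new = momentum * qhat + (1 - momentum) * mean_prob

    # causal-inference step: debias pseudo-labels from weak augmentations
    q_for_debias = qhat_new if use_updated_for_debias else qhat
    pseudo = torch.softmax(logits_w.detach() - tau * torch.log(q_for_debias),
                           dim=-1)
    max_probs, targets = pseudo.max(dim=-1)
    mask = (max_probs >= threshold).float()

    # adaptive marginal loss on strong augmentations with the updated qhat
    loss_u = (F.cross_entropy(logits_s + tau * torch.log(qhat_new), targets,
                              reduction="none") * mask).mean()
    return loss_u, qhat_new
```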
Thanks!