VisionLearningGroup / DANCE

Repository for Universal Domain Adaptation through Self-supervision
MIT License

`feat_mat2` in `loss_nc` #5

Closed: lqxisok closed this 3 years ago

lqxisok commented 3 years ago

After carefully checking the training code, I have a small question about `feat_mat2` in `loss_nc`.

How does `feat_mat2` help the performance of neighborhood clustering?

I see that when computing `loss_nc`, the entropy is fed the concatenation of three tensors: `out_t`, `feat_mat`, and `feat_mat2`. It seems that `out_t` and `feat_mat` correspond to the $W$ and $V$ terms, respectively, described in Sec. 3.2. So I guess `feat_mat2` plays the same role as `feat_mat`, except that it is computed only over the mini-batch data, is that right? Is there a good explanation for including it? Thanks in advance.
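For context, this is roughly how I read the computation; the helper names (`memory_bank`, `index_t`, `temp`, `eta`) and the exact masking are my paraphrase, not the repository code verbatim:

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # Shannon entropy of the softmax over the concatenated similarity scores.
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def loss_nc_sketch(out_t, feat_t, memory_bank, index_t, temp=0.05, eta=0.05):
    # out_t:       classifier outputs for the target batch (the W term in Sec. 3.2)
    # feat_t:      L2-normalized features of the current mini-batch
    # memory_bank: L2-normalized stored features of all target samples (the V term)
    # index_t:     dataset indices of the mini-batch samples

    # Similarity to the memory bank (feat_mat); each sample's own slot is
    # masked so it cannot trivially select itself as a neighbor.
    feat_mat = torch.matmul(feat_t, memory_bank.t()) / temp
    feat_mat[torch.arange(feat_t.size(0)), index_t] = -1.0 / temp

    # Similarity within the mini-batch (feat_mat2), diagonal masked likewise.
    feat_mat2 = torch.matmul(feat_t, feat_t.t()) / temp
    mask = torch.eye(feat_mat2.size(0), dtype=torch.bool, device=feat_t.device)
    feat_mat2 = feat_mat2.masked_fill(mask, -1.0 / temp)

    # Entropy over the concatenation of classifier outputs, memory similarities,
    # and in-batch similarities.
    return eta * entropy(torch.cat([out_t, feat_mat, feat_mat2], dim=1))
```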

ksaito-ut commented 3 years ago

Sorry for the late reply. `feat_mat2` is computed on the mini-batch samples, so it provides "live" features as opposed to the memory features.

The memory features are not directly updated to minimize the loss, while the mini-batch features can be influenced by the loss. In this sense, using `feat_mat2` can make a difference.
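Roughly, the memory is refreshed outside the autograd graph after each step, something like the sketch below (the momentum form and the `update_memory` name are just for illustration, not a quote of the code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_memory(memory_bank, feat_t, index_t, momentum=0.0):
    # Refresh the stored features after the optimizer step. Because this runs
    # under no_grad, gradients from feat_mat never move the memory bank; only
    # the live mini-batch features (and hence feat_mat2, where they appear on
    # both sides of the similarity) are shaped directly by the loss.
    new = momentum * memory_bank[index_t] + (1.0 - momentum) * feat_t
    memory_bank[index_t] = F.normalize(new, dim=1)
```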