VisionLearningGroup / DANCE

repository for Universal Domain Adaptation through Self-supervision
MIT License

some simple questions about released code #2

Closed by datar001 3 years ago

datar001 commented 3 years ago

Hi, thanks very much for sharing this great work. I have some simple questions about the code in train_dance.py:

  1. In lines 188-189: `### We do not use memory features present in mini-batch` / `feat_mat[:, index_t] = -1 / conf.model.temp`. I understand this computes the similarity between the mini-batch and itself using the current features rather than the memory features, but what is the meaning of the value -1 / conf.model.temp?

  2. In lines 195-196: `loss_nc = conf.train.eta * entropy(torch.cat([out_t, feat_mat, feat_mat2], 1))`. I can't understand the effect of directly concatenating feat_mat and feat_mat2. Why not put feat_mat2 into the proper index positions in feat_mat? As we know, the indices of feat_t differ between iterations.

Thanks very much; I hope to hear back from you.

ksaito-ut commented 3 years ago

Hi, thanks for your interest in our work.

  1. We simply fill in small values at index_t. The value -1 / conf.model.temp is the minimum possible value of the scaled similarity, since cosine similarity between normalized features is bounded below by -1.
  2. As you mention, putting feat_mat2 into the proper index positions in feat_mat would be one correct implementation. We simply implemented that part by concatenating feat_mat (with the mini-batch indexes deactivated) and feat_mat2 (the similarity within the mini-batch).
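
The masking and concatenation described above can be sketched roughly as follows. This is a toy illustration under assumptions, not the repository's exact code: the tensor sizes, the memory bank, the classifier logits out_t, and the entropy computation here are hypothetical stand-ins for what train_dance.py does.

```python
import torch
import torch.nn.functional as F

temp = 0.05                      # temperature (conf.model.temp in the repo)
n_mem, n_batch, dim = 8, 4, 16   # toy sizes, not the paper's settings

torch.manual_seed(0)
mem_bank = F.normalize(torch.randn(n_mem, dim), dim=1)    # stored memory features
feat_t = F.normalize(torch.randn(n_batch, dim), dim=1)    # current mini-batch features
index_t = torch.tensor([1, 3, 5, 7])                      # memory slots of this batch

# scaled similarity between the mini-batch and the whole memory bank
feat_mat = feat_t @ mem_bank.t() / temp

# deactivate the memory copies of the current batch: cosine similarity
# is bounded below by -1, so -1 / temp is the minimum scaled value
feat_mat[:, index_t] = -1 / temp

# similarity within the mini-batch, computed from the current features;
# the self-similarity diagonal is suppressed the same way
feat_mat2 = feat_t @ feat_t.t() / temp
feat_mat2.masked_fill_(torch.eye(n_batch, dtype=torch.bool), -1 / temp)

# concatenate classifier logits with both similarity blocks and take the
# entropy of the resulting distribution (out_t is a hypothetical stand-in)
out_t = torch.randn(n_batch, 10)
p = F.softmax(torch.cat([out_t, feat_mat, feat_mat2], dim=1), dim=1)
loss_nc = -(p * torch.log(p + 1e-5)).sum(dim=1).mean()
```

After the softmax, the masked entries receive near-zero probability, so each sample's distribution is spread only over the class logits, the other memory slots, and the other samples in the batch.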

datar001 commented 3 years ago

Thanks for your reply. And I still have a very naive question: what is the difference between train_dance.py and train_class_inc_dance.py? I'm a rookie in this field. :sweat_smile:

ksaito-ut commented 3 years ago

train_class_inc_dance.py is the script used for the class-incremental DA experiment (Table 5 in the paper).