Zhongying-Deng / DAC-Net

PyTorch implementation of DAC-Net ("Zhongying Deng, Kaiyang Zhou, Yongxin Yang, Tao Xiang. Domain Attention Consistency for Multi-Source Domain Adaptation. BMVC 2021")

the role of compactness loss #5

Closed zy199676 closed 2 years ago

zy199676 commented 2 years ago

Hello, I don't quite understand the role of the compactness loss. Could you explain it again? Thank you.

Zhongying-Deng commented 2 years ago

Thanks for your question. MSDA aims to achieve good classification performance on target data. Good classification performance usually requires discriminative feature learning, i.e., inter-class separability and intra-class compactness. The class compactness loss is therefore proposed to pull the target features toward the classifier's weight vectors, so that the target features become more compact and, at the same time, move away from the decision boundary. In the attached figure, the red dots represent the target features, which are pulled toward the classifier's weight vector W. Note that the classifier's weight vector is chosen as the clustering center for two reasons: first, it is usually far away from the decision boundary, and second, it is less sensitive to noisy target pseudo-labels.
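
For intuition, here is a minimal PyTorch sketch of this idea: target features are pulled toward the weight vector of their pseudo-labelled class. The function name, the MSE distance, and the plain linear classifier are illustrative assumptions, not the repo's exact implementation.

```python
import torch
import torch.nn.functional as F

def class_compactness_loss(feats, classifier_weight):
    """Pull target features toward the weight vector of their
    pseudo-labelled class (illustrative sketch, not the repo code).

    feats: (N, D) target features
    classifier_weight: (C, D) weight matrix of a linear classifier
    """
    logits = feats @ classifier_weight.t()      # (N, C) class scores
    pseudo_labels = logits.argmax(dim=1)        # (N,) pseudo-labels
    centers = classifier_weight[pseudo_labels]  # (N, D) per-sample cluster center
    # Minimising this distance makes the target features compact around
    # the weight vectors, and hence keeps them away from the decision boundary.
    return F.mse_loss(feats, centers)
```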

zy199676 commented 2 years ago

Thank you for your reply. Can I try to use the discrepancy between classifiers on the target data to encourage class separability, like in the attached figure?

Zhongying-Deng commented 2 years ago

I assume so. What you described seems similar to MCD (Maximum Classifier Discrepancy for Unsupervised Domain Adaptation).
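
For reference, MCD's core term is the discrepancy between two classifiers' predictions on the same target batch; a minimal sketch (using the L1 distance between softmax outputs, as in the MCD paper):

```python
import torch.nn.functional as F

def classifier_discrepancy(logits1, logits2):
    """MCD-style discrepancy: mean absolute difference between the
    softmax outputs of two classifiers on the same target batch."""
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    return (p1 - p2).abs().mean()
```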

zy199676 commented 2 years ago

Yes, I may do some research based on your work. Thank you!

zy199676 commented 2 years ago

In the code, I see that losses are computed on feature layers at different scales. Do you directly add them together?

zy199676 commented 2 years ago

Can the MMD losses from different feature layers be directly added together? And can I switch the EMA loss calculation directly to MMD?


Zhongying-Deng commented 2 years ago

Each feature layer can be used to calculate a DAC loss (a scalar loss value), and these scalar values are then summed as the final DAC loss. The same can be done for the MMD loss (each feature layer gives a scalar MMD loss value, and all these values are summed up), as in the sketch below. I think the EMA loss (I assume you mean the DAC loss?) can be replaced by the MMD loss, but I am not sure about its performance.
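
A minimal sketch of the per-layer summation, here with a simple linear-kernel MMD estimator; `source_feats` and `target_feats` are hypothetical lists of per-layer feature batches, not names from the repo:

```python
import torch

def mmd_linear(fs, ft):
    """Linear-kernel MMD: squared distance between the source and target
    feature means (one common, simple MMD estimator)."""
    return (fs.mean(dim=0) - ft.mean(dim=0)).pow(2).sum()

# One scalar loss per feature layer, then summed into the final objective.
total_loss = sum(mmd_linear(fs, ft)
                 for fs, ft in zip(source_feats, target_feats))
```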

zy199676 commented 2 years ago

What do these variables represent in the attached code? And can I change it to a single source domain?

Zhongying-Deng commented 2 years ago

input_x and input_x2 are weakly-augmented and strongly-augmented source images, respectively, with input_x[i] corresponding to input_x2[i] (i.e., they come from the same source image but with different augmentations, so they share the same class label label_x[i] and the same domain label domain_x[i]). Using domain_x, you can select the data of a single domain; e.g., domain_x[i] == 0 tells you whether the i-th source image comes from the 0-th source domain, as in the sketch below.
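
A small sketch of the single-domain selection (variable names taken from the snippet above; the boolean mask is one straightforward way to do it):

```python
# Boolean mask selecting the samples of the 0-th source domain.
mask = (domain_x == 0)

single_x      = input_x[mask]   # weakly-augmented images of domain 0
single_x2     = input_x2[mask]  # strongly-augmented counterparts
single_labels = label_x[mask]   # class labels of those samples
```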