yxgeee / MMT

[ICLR-2020] Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification.
https://yxgeee.github.io/projects/mmt
MIT License

A question about clustering #42


CaptainPrice12 commented 3 years ago

Thank you for sharing this work!

Actually, I have a question about the clustering process in the mmt_train_kmeans.py file:

```python
dict_f, _ = extract_features(model_1_ema, cluster_loader, print_freq=50)
cf_1 = torch.stack(list(dict_f.values())).numpy()
dict_f, _ = extract_features(model_2_ema, cluster_loader, print_freq=50)
cf_2 = torch.stack(list(dict_f.values())).numpy()
cf = (cf_1 + cf_2) / 2
```

Here, I see that the mean-nets of model 1 and model 2 are used to generate the features for clustering and for initializing the classifiers. But according to the MMT paper, it seems the current models (model 1 and model 2) in each epoch are used to compute the clustering features, not the mean-nets. Does using the mean-nets here provide better performance? Could you explain this choice? Thanks!
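For reference, the averaging-and-clustering step in question can be sketched with a minimal NumPy k-means. This is illustrative only: the feature shapes, cluster count, and the `kmeans` helper below are made up, and the repo uses its own clustering utilities rather than this toy loop.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Minimal k-means (illustrative stand-in for the repo's clustering step)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # recompute centers, keeping the old one if a cluster goes empty
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# random stand-ins for the two mean-nets' feature matrices (N samples x D dims)
rng = np.random.default_rng(0)
cf_1 = rng.standard_normal((200, 64)).astype(np.float32)
cf_2 = rng.standard_normal((200, 64)).astype(np.float32)

cf = (cf_1 + cf_2) / 2        # average the two mean-nets' features, as in the snippet
pseudo_labels = kmeans(cf, k=10)  # one pseudo identity per sample
```

The averaging means both peer networks contribute equally to the pseudo labels, so neither network's noise dominates the clustering.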

yxgeee commented 3 years ago

Yes, we use the mean-nets to extract features for clustering in each epoch, since the features extracted by the mean-nets are more robust. I don't remember the exact performance gap; you could try replacing them with the current nets.
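For context, a mean-net is maintained as an exponential moving average (EMA) of the current net's weights, which is why its features tend to be smoother across epochs. A minimal sketch of one EMA update, using toy NumPy arrays in place of a PyTorch state dict (the `alpha=0.999` momentum is a typical mean-teacher value, not necessarily the one used in this repo):

```python
import numpy as np

def update_ema(params, ema_params, alpha=0.999):
    """One mean-teacher step: the mean-net's weights track an exponential
    moving average of the current net's weights (illustrative sketch)."""
    for name in params:
        ema_params[name] = alpha * ema_params[name] + (1 - alpha) * params[name]
    return ema_params

# toy weights standing in for a network's state dict
params = {"w": np.ones((2, 2)), "b": np.zeros(2)}
ema_params = {"w": np.zeros((2, 2)), "b": np.zeros(2)}
ema_params = update_ema(params, ema_params, alpha=0.9)
```

Because each update moves the mean-net only slightly toward the current weights, its features change gradually and are less affected by any single noisy training step.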

CaptainPrice12 commented 3 years ago

Got it. Thank you so much for the reply! May I ask one more question?

In the OpenUnReID repo, the implementation of MMT should be MMT+, right? In that repo, source_pretrain/main.py uses only source-domain data for pre-training, unlike this MMT repo, which uses both source and target data (target forward-only).

Meanwhile, for target-domain training, OpenUnReID uses both source (labeled) and target (pseudo-labeled) data to conduct MMT training, with a MoCo-based loss and DSBN, right? Please correct me if I have misunderstood anything. Thanks!

yxgeee commented 3 years ago

MMT+ in OpenUnReID does not adopt source-domain pre-training. It conducts MMT training from scratch (ImageNet pre-trained) using images from both domains. The MoCo-based loss is not adopted, but DSBN is used. The MoCo loss appears in that repo only for the VisDA challenge.
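For readers unfamiliar with DSBN (Domain-Specific Batch Normalization): it keeps a separate set of batch-norm statistics per domain, so source and target batches are normalized independently. A minimal NumPy sketch, illustrative only; the real OpenUnReID module is a PyTorch layer with learnable affine parameters, which are omitted here:

```python
import numpy as np

class DSBN:
    """Domain-Specific Batch Norm sketch: one set of running statistics
    per domain (affine parameters omitted for brevity)."""

    def __init__(self, num_features, num_domains=2, eps=1e-5, momentum=0.1):
        self.mean = np.zeros((num_domains, num_features))
        self.var = np.ones((num_domains, num_features))
        self.eps = eps
        self.momentum = momentum

    def __call__(self, x, domain):
        # normalize with this batch's statistics
        bm, bv = x.mean(0), x.var(0)
        # update only the chosen domain's running stats
        self.mean[domain] = (1 - self.momentum) * self.mean[domain] + self.momentum * bm
        self.var[domain] = (1 - self.momentum) * self.var[domain] + self.momentum * bv
        return (x - bm) / np.sqrt(bv + self.eps)

rng = np.random.default_rng(0)
bn = DSBN(num_features=8)
src = rng.standard_normal((32, 8)) + 5.0   # source batch, shifted mean
tgt = rng.standard_normal((32, 8)) - 5.0   # target batch, shifted mean
out_src = bn(src, domain=0)
out_tgt = bn(tgt, domain=1)
```

The point of the per-domain statistics is that the source and target feature distributions can differ substantially, and sharing one set of BN statistics would mix them.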

CaptainPrice12 commented 3 years ago

Thanks for the help!