yxgeee / SpCL

[NeurIPS-2020] Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID.
https://yxgeee.github.io/projects/spcl
MIT License

cannot reproduce the results in the paper #6

Closed shuxjweb closed 4 years ago

shuxjweb commented 4 years ago

Thanks for your insightful work. I have run the UDA code for Duke-to-Market. The result is rank-1: 86.9%, mAP: 71.6%, which is about 3.4% lower in rank-1 and 5.1% lower in mAP than the paper (rank-1: 90.3%, mAP: 76.7%). Why?

yxgeee commented 4 years ago

How many GPUs did you use?

yxgeee commented 4 years ago

The number of GPUs, as well as the batch size on each GPU, does indeed affect the final performance (see https://github.com/yxgeee/SpCL/issues/1). This is an open question in both fully-supervised and unsupervised re-ID tasks; it may be due to some specific characteristics of re-ID datasets.
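For readers wondering why the GPU count matters at all: in a standard `torch.nn.DataParallel` setup, the total batch is split across the visible GPUs, so BatchNorm statistics are computed over fewer samples per device as the GPU count grows, which can shift the final numbers. The sketch below is a generic PyTorch illustration of that split, not the SpCL training code; the `num_classes=751` head and the 256x128 input size are just illustrative re-ID defaults.

```python
# Minimal sketch (not the SpCL training code): with torch.nn.DataParallel,
# each forward pass splits the batch across visible GPUs, so BatchNorm
# statistics are computed over batch_size / num_gpus samples per device.
import torch
import torchvision

model = torchvision.models.resnet50(num_classes=751)  # 751 = Market-1501 IDs (illustrative)
if torch.cuda.is_available():
    num_gpus = torch.cuda.device_count()
    model = torch.nn.DataParallel(model).cuda()
else:
    num_gpus = 1

batch_size = 64  # total batch size, as in the provided scripts
per_gpu = batch_size // max(num_gpus, 1)
print(f"{num_gpus} GPU(s) -> BatchNorm sees ~{per_gpu} samples per device")

images = torch.randn(batch_size, 3, 256, 128)  # typical re-ID input size (illustrative)
if torch.cuda.is_available():
    images = images.cuda()
features = model(images)  # BN running stats depend on the per-GPU split above
```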

yxgeee commented 4 years ago

If you adopted exactly the same settings as provided in the scripts (e.g., 4 GPUs, a batch size of 64, etc.) but still cannot reproduce the results, please tell me and I will run the code again to find the issue. By the way, you could also try our OpenUnReID codebase, which also supports the SpCL algorithm and achieves better performance.
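A small pre-flight check along these lines can help catch configuration drift before a long run. This is a hypothetical helper, not part of the repo; the expected values simply mirror the settings mentioned above.

```python
# Hypothetical pre-flight check (not part of the SpCL repo): verify that the
# visible GPU count matches the setting used for the reported numbers before
# starting a run, so a silent hardware mismatch doesn't skew the results.
import torch

EXPECTED_GPUS = 4    # settings mentioned above (assumed from the scripts)
EXPECTED_BATCH = 64

found = torch.cuda.device_count()
assert found == EXPECTED_GPUS, (
    f"Expected {EXPECTED_GPUS} GPUs but found {found}; "
    "results may not match the reported numbers."
)
print(f"OK: {found} GPUs, total batch size {EXPECTED_BATCH} "
      f"({EXPECTED_BATCH // found} per GPU)")
```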

WangWenhao0716 commented 4 years ago

I can reproduce the results. Sometimes they come out slightly lower than reported (75.xx mAP) and sometimes slightly higher (77.xx).