SHI-Labs / Self-Similarity-Grouping

Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification (ICCV 2019, Oral)

Market2Duke results #2

Open jh97321 opened 5 years ago

jh97321 commented 5 years ago

I ran the code for Market2Duke and Duke2Market. The Duke2Market result matches the reported numbers, while the Market2Duke result shows a drop in performance. The results are shown below. (I ran on Ubuntu 16.04 LTS with PyTorch 0.4.0 and Python 3.6.)

| SSG method | rank-1 | mAP |
| --- | --- | --- |
| reported | 73.0% | 53.4% |
| observed | 70.2% | 49.8% |

| SSG++ method | rank-1 | mAP |
| --- | --- | --- |
| reported | 76.0% | 60.3% |
| observed | 72.7% | 53.7% |

No changes were made to the training code. Can you give me some advice on what the reasons might be? Thank you.

OasisYang commented 5 years ago

Sorry, I don't know the reason. Can you reproduce the performance using the provided models? You need to use the provided model as the pre-trained model and make sure num-split is two.

jh97321 commented 5 years ago

The num-split in my settings is two. I use source_train.py to get the pre-trained model; it works for Duke2Market but not for Market2Duke. I will try the provided models. Thank you anyway.

geyutang commented 5 years ago

@jh97321 I have the same problem as you.

In addition, my D2M result also drops.

Have you solved your problem? @OasisYang Any suggestions about this? Thanks!

OasisYang commented 5 years ago

If you cannot load the pretrained model, this link may be helpful. And please make sure you train our code on two GPUs.
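In case it helps, here is a minimal sketch of loading a provided checkpoint as the pre-trained model. The checkpoint path is taken from the argument dump later in this thread; the `'state_dict'` key and the plain resnet50 backbone are assumptions (common in open-reid-style code), so adapt them to the actual model definition:

```python
import torch
from torchvision.models import resnet50

# Backbone is an assumption; the repo's actual model class may differ.
model = resnet50()

ckpt = torch.load('logs/pretrained_models/dukemtmc_trained.pth.tar',
                  map_location='cpu')

# Keep only tensors whose name and shape match the target model, so a
# differently-sized classifier head does not break loading.
model_state = model.state_dict()
filtered = {k: v for k, v in ckpt['state_dict'].items()
            if k in model_state and v.shape == model_state[k].shape}
model.load_state_dict(filtered, strict=False)
```

For the two-GPU requirement, the usual pattern is to expose both devices (e.g. `CUDA_VISIBLE_DEVICES=0,1`) and wrap the model in `torch.nn.DataParallel`.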

geyutang commented 5 years ago

Ok, I will try this. Thanks! In addition, I have another question about the DBSCAN algorithm for UDA person re-ID.

Why do we need the distances between both source and target samples to compute self-labels for the target samples with DBSCAN? I read the original DBSCAN paper and the sklearn API, and found that the input to this clustering algorithm is either a feature matrix or a distance matrix.

I am confused about this! Any suggestions? Thanks!
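For what it's worth, sklearn's DBSCAN accepts either raw features or a precomputed pairwise distance matrix; the precomputed path is what lets a method cluster on a re-ranked distance (which may be computed with the help of source samples) instead of plain Euclidean distance on target features. A minimal sketch, with purely illustrative shapes and eps/min_samples values:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances

# Illustrative only: 500 target samples with 128-dim features.
rng = np.random.RandomState(0)
target_feats = rng.rand(500, 128).astype(np.float32)

# Option 1: cluster directly on the feature matrix (Euclidean by default).
labels_a = DBSCAN(eps=0.6, min_samples=4).fit_predict(target_feats)

# Option 2: cluster on a precomputed distance matrix. Any distance can
# be plugged in here, e.g. a k-reciprocal re-ranked distance.
dist = pairwise_distances(target_feats, metric='euclidean')
labels_b = DBSCAN(eps=0.6, min_samples=4,
                  metric='precomputed').fit_predict(dist)

# DBSCAN labels unclustered (noise) points as -1.
print(np.unique(labels_a), np.unique(labels_b))
```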

Alan-Paul commented 5 years ago

I encountered the same problems. I ran selftraining.py to train a Duke2Market model using your pretrained model; however, the results drop in performance. My final results are mAP: 54.0%, rank-1: 76.7%, while the results reported in your paper are mAP: 58.3%, rank-1: 80%. My parameters are below. My environment is PyTorch 1.1.0, Python 3.6.0. Any suggestions will be appreciated!

```
arch='resnet50', batch_size=128, combine_trainval=False, data_dir='./data',
dce_loss=False, dist_metric='euclidean', dropout=0, epochs=70, evaluate=False,
features=128, gpu_devices='0,1', height=None, iteration=30, lambda_value=0.1,
load_dist=False, logs_dir='logs/duke2market', lr=6e-05, margin=0.5,
no_rerank=False, num_instances=4, num_split=2, print_freq=20,
resume='logs/pretrained_models/dukemtmc_trained.pth.tar', rho=0.0016, seed=1,
split=0, src_dataset='dukemtmc', start_save=0, tgt_dataset='market1501',
weight_decay=0.0005, width=None, workers=4
```
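For reference, the dump above reads like a printed argparse namespace, so it presumably corresponds to an invocation along the lines of `python selftraining.py --src-dataset dukemtmc --tgt-dataset market1501 --resume logs/pretrained_models/dukemtmc_trained.pth.tar --num-split 2 ...` (flag spellings are guessed from the underscored attribute names; the script's `--help` output is authoritative).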

geyutang commented 5 years ago

@Alan-Paul, I got the same result as you. I also tried re-training on the DukeMTMC dataset using source_train.py. Same result. The performance drop exists for Duke->Market.

OasisYang commented 5 years ago

@geyutang @Alan-Paul Here are some suggestions. First, check whether the performance of our provided model is the same as reported in the paper. Also, check the performance of the pretrained model, which should be mAP: 26, R1: 54 when transferring from Duke to Market (Market2Duke: 16/30). I conducted all experiments with pytorch=0.4, torchvision=0.2, and scikit-learn=0.19.1. I hope these suggestions help.

geyutang commented 5 years ago

The D2M result at the beginning is right.

```
Mean AP: 26.8%
CMC Scores    market1501
  top-1            54.2%
  top-5            70.5%
  top-10           76.8%
```

But the model saturates from epoch 10. Below is my log of training rank-1 over iterations; it looks like overfitting. In addition, even after slightly modifying the learning rate, the result does not reach that reported in your paper. Any suggestions for solving this saturation problem?

[image: rank-1 vs. training iteration]

Also, my torch version is 1.0.0, which may contribute to the mismatch in results. Thanks for your kind reply.

OasisYang commented 4 years ago

I trained the model again with PyTorch 0.4.1 and got an adaptation result from Market to Duke of 53.3/72.4 (mAP/R1), which is almost the same as the results reported in the paper.

yihongXU commented 4 years ago

> I trained the model again with PyTorch 0.4.1 and got an adaptation result from Market to Duke of 53.3/72.4 (mAP/R1), which is almost the same as the results reported in the paper.

Hi, did you try Duke->Market? It seems to me that we have difficulty reaching 58.3/80.0 (mAP/R1); I got 52.6/75.7 (mAP/R1) instead. Thank you.

OasisYang commented 4 years ago

I will try it, but it may take some time since most of our computation resources are used for another ongoing project.

beiyangxiaolaodi commented 3 years ago

I ran the code for Market2Duke with PyTorch 0.4.1, but the result still shows a drop in performance.

| SSG method | rank-1 | mAP |
| --- | --- | --- |
| observed | 68.7% | 49.2% |
| reported | 73.0% | 53.4% |

Can you help? Thanks!