Closed: Velpro-collab closed this issue 4 months ago.
It seems confusing. What is the detailed process of your retraining? Can you reproduce the reported results by directly using our provided ckpt?
Hello, author. I'm sorry for not responding to your message sooner; we have been very busy.
We retrained in order to replicate the performance when using only the fake samples generated by PairGAN, without the auxiliary samples. We set --fake to true, modified the dataset path, and kept the default settings for everything else. We also tested directly with the pretrained checkpoint ckpt_res34_fake.pth that you published. The test results are as follows:
load from /media/lele/c/zuozhigang/BUPTC/BUPTCampus_ckpt_feat/ckpt/ckpt_res34_fake.pth...
Frame Sample: uniform
===> MCPRL-ReID Dataset (query) <===
Number of identities: 1076
Number of samples: 1076
===> MCPRL-ReID Dataset (gallery) <===
Number of identities: 1076
Number of samples: 4844
========== Testing ==========
100%| 17/17 [00:06<00:00, 2.60it/s]
100%| 75/75 [00:19<00:00, 3.77it/s]
RGB->RGB: mAP: 61.18% | Rank-1: 66.02% | Rank-5: 84.77% | Rank-10: 88.87% | Rank-20: 91.60%
RGB->IR : mAP: 49.85% | Rank-1: 53.71% | Rank-5: 75.20% | Rank-10: 81.25% | Rank-20: 87.11%
IR ->RGB: mAP: 51.84% | Rank-1: 53.45% | Rank-5: 73.18% | Rank-10: 80.08% | Rank-20: 85.82%
IR ->IR : mAP: 63.93% | Rank-1: 69.22% | Rank-5: 83.96% | Rank-10: 86.94% | Rank-20: 89.55%
AllModal: mAP: 53.14% | Rank-1: 66.70% | Rank-5: 83.40% | Rank-10: 87.40% | Rank-20: 89.79%
Hi, there seems to be a misunderstanding because of my unclear description.
To reproduce our results, you should run test.py twice: once with --fake and once without --fake.
We use the concatenated features from these two runs for evaluation, as follows:
https://github.com/dyhBUPT/BUPTCampus/blob/db1f179292eda2ed4ef280d330a850c9a53fb44a/re_ranking.py#L137-L140
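For clarity, here is a minimal sketch of the feature-concatenation step described above. It is an illustration, not the repository's actual code: the variable names and the use of random arrays as stand-ins for the two runs' extracted features are assumptions; in practice the features would come from the two test.py runs and be compared by the repository's ranking code.

```python
import numpy as np

# Stand-ins for the query features produced by the two test.py runs
# (one with --fake, one without). Shapes are hypothetical:
# 1076 query samples, 512-dim features per run.
feat_with_fake = np.random.rand(1076, 512).astype(np.float32)
feat_without_fake = np.random.rand(1076, 512).astype(np.float32)

# Concatenate along the feature dimension, then L2-normalize each row
# so that distance-based ranking treats both runs' features consistently.
feat = np.concatenate([feat_with_fake, feat_without_fake], axis=1)
feat /= np.linalg.norm(feat, axis=1, keepdims=True)

print(feat.shape)  # (1076, 1024)
```

The same concatenation would be applied to the gallery features before computing the distance matrix used for mAP/Rank-k evaluation.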
Thank you for your patient guidance. The issue has now been successfully resolved.
Dear Author,

We have designed a video-based visible-infrared person re-identification method and look forward to validating its performance on your dataset and comparing our results with those of your proposed method. However, we did not use the auxiliary samples included in the dataset for training, and we tried retraining your method with the auxiliary samples discarded. The results we obtained differ significantly from those reported in your paper. Could you confirm whether there are any particular issues or considerations to keep in mind when training or testing with the published code?

Below are the testing results we obtained after retraining without auxiliary samples:
Infrared to Visible retrieval: Rank-1: 53.64, Rank-5: 74.33, Rank-10: 81.80, mAP: 51.75
Visible to Infrared retrieval: Rank-1: 55.66, Rank-5: 75.20, Rank-10: 80.27, mAP: 51.06
We look forward to receiving your reply. Thanks very much for your time.