1KK077 / IDKL


Regarding the issue of using rerank in the code #4

Open xl-Zzzzzzz opened 5 months ago

xl-Zzzzzzz commented 5 months ago

First of all, thank you for your contribution to VI-ReID. We have carefully read your article and reproduced the work using the code you provided. We found that your code uses rerank, specifically in the testing on the SYSU-MM01 and LLCM datasets, but you did not mention this in the preprint. As is well known, work in the VI-ReID field generally does not report post-rerank results as comparison metrics. We hope you can explain this.
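For context, the "rerank" toggled in ReID test scripts is commonly the k-reciprocal re-ranking of Zhong et al. The sketch below is purely illustrative and is not the repository's implementation: the function names and the simplified neighbor-set Jaccard scoring are my own, standing in for the full k-reciprocal procedure.

```python
# Illustrative sketch only (not the repository's code): re-score
# query-gallery pairs by Jaccard distance between k-nearest-neighbor
# sets, in the spirit of k-reciprocal re-ranking (Zhong et al.).
def knn(dist_row, k):
    """Indices of the k smallest distances in one row."""
    return set(sorted(range(len(dist_row)), key=lambda j: dist_row[j])[:k])

def rerank(q_g_dist, g_g_dist, k=2):
    """q_g_dist: query-gallery distances; g_g_dist: gallery-gallery."""
    n_g = len(g_g_dist)
    # Each gallery item's k-NN set within the gallery itself.
    g_neighbors = [knn(g_g_dist[j], k) for j in range(n_g)]
    reranked = []
    for row in q_g_dist:
        q_neighbors = knn(row, k)
        # Jaccard distance between neighbor sets replaces the raw distance,
        # so gallery items sharing neighbors with the query move up.
        reranked.append([
            1 - len(q_neighbors & g_neighbors[j]) / len(q_neighbors | g_neighbors[j])
            for j in range(n_g)
        ])
    return reranked
```

Because re-ranking exploits gallery-gallery structure that plain distance ranking ignores, it can lift rank-1/mAP substantially, which is why reporting it silently alongside non-reranked baselines is contentious.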

alehdaghi commented 5 months ago

Did you get the same results as the paper?

xl-Zzzzzzz commented 5 months ago

> Did you get the same results as the paper?

Our reproduced results show that on the SYSU-MM01 dataset, performance with rerank is approximately 81%, while without rerank it is only about 71%.

xyz999-skech commented 5 months ago

With the rerank parameter set to false, rank-1 is only about 71% on the SYSU-MM01 dataset.
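For readers unfamiliar with the metrics being compared here, this is a minimal sketch (my own, not the repository's evaluation code) of how rank-1 and mAP are computed from a query-gallery distance matrix when rerank is disabled:

```python
# Minimal sketch (not the repository's code): rank-1 and mAP from a
# raw query-gallery distance matrix, i.e. the rerank=False setting.
def rank1_and_map(dist, q_ids, g_ids):
    """dist[i][j] = distance between query i and gallery item j."""
    n_q = len(q_ids)
    rank1_hits, aps = 0, []
    for i in range(n_q):
        # Sort gallery items by ascending distance to this query.
        order = sorted(range(len(g_ids)), key=lambda j: dist[i][j])
        matches = [g_ids[j] == q_ids[i] for j in order]
        if matches[0]:          # rank-1: nearest item has the right identity
            rank1_hits += 1
        # Average precision over all true matches for this query.
        hits, precisions = 0, []
        for pos, m in enumerate(matches, start=1):
            if m:
                hits += 1
                precisions.append(hits / pos)
        aps.append(sum(precisions) / max(hits, 1))
    return rank1_hits / n_q, sum(aps) / n_q
```

With re-ranking enabled, the same function would simply be fed the re-scored distance matrix instead of the raw one, which is why the two settings can differ by ~10 points.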

1KK077 commented 5 months ago

Hello, thank you very much for your concern. This code mainly provides the main module functions of the paper for reference and is not the complete code. Some tricks are omitted and optimizer hyperparameters were modified. The actual code does not use rerank and builds its baseline on another SOTA model. The complete code will be released soon. Thank you for your patience.

echo9958 commented 4 months ago

> Hello, thank you very much for your concern. This code mainly provides the main module functions of the paper for reference and is not the complete code. Some tricks are omitted and optimizer hyperparameters were modified. The actual code does not use rerank and builds its baseline on another SOTA model. The complete code will be released soon. Thank you for your patience.

If the baseline was constructed based on another SOTA model, do the ablation experiments in Table 4 of the preprint use two different baselines? Or is 66.47% mAP the accuracy of the SOTA model? This is very confusing to me, and I hope you can explain.

1KK077 commented 4 months ago

> > Hello, thank you very much for your concern. This code mainly provides the main module functions of the paper for reference and is not the complete code. Some tricks are omitted and optimizer hyperparameters were modified. The actual code does not use rerank and builds its baseline on another SOTA model. The complete code will be released soon. Thank you for your patience.
>
> If the baseline was constructed based on another SOTA model, do the ablation experiments in Table 4 of the preprint use two different baselines? Or is 66.47% mAP the accuracy of the SOTA model? This is very confusing to me, and I hope you can explain.

Hello, thank you very much for your concern. The 66.47% mAP reported in Table 4 is the baseline result of ResNet-50. However, we enhanced this baseline with several SOTA tricks: the channel-exchange trick from Mang Ye et al., the ResNet block-attention enhancement from Yukang Zhang et al., our own trick of Adam regularization parameter-fitting boundary training, and local image-enhancement methods, among others. The Adam learning rate is 3.5 * 10^-5, with no special modifications. We hope this clarifies your confusion.
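For reference only, the stated Adam learning rate of 3.5 * 10^-5 would be configured like this in a standard PyTorch setup; this is a hedged sketch, not the released training code, and the model here is just a placeholder head:

```python
# Hedged sketch (standard PyTorch API, not the released code):
# Adam with the learning rate quoted in the reply above.
import torch

model = torch.nn.Linear(2048, 512)  # placeholder for the actual backbone/head
optimizer = torch.optim.Adam(model.parameters(), lr=3.5e-5)
```

Any additional regularization or boundary-training scheme the authors mention would be layered on top of this and is not shown here.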