Hi! Thank you for your interest in our work. We re-evaluated our model on Cars-196 with the uploaded model weights and found that the reported Recall@K scores are reproducible. We are planning to add detailed requirements with version specifications. Please note that the package versions below were used to reproduce the reported scores.
- python=3.6.9
- pytorch=1.2.0
- numpy=1.16.4
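
As a quick sanity check, something like the snippet below can confirm that your environment matches these versions (just a convenience check, not part of our codebase):

```python
import sys
import numpy
import torch

# Print installed versions; they should match the list above.
print("python :", sys.version.split()[0])  # expected: 3.6.9
print("pytorch:", torch.__version__)       # expected: 1.2.0
print("numpy  :", numpy.__version__)       # expected: 1.16.4
```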
Hello!
First, thank you for sharing your work!
We tried to reproduce the results in the paper using both your codebase and our own implementation on top of the pytorch_metric_learning library. All hyper-parameters match the ones in the paper, but in training we can only reach 79% R@1 on the Cars-196 dataset. When using your trained weights, we get R@1 of 81.48%, which is still noticeably below the reported result (~86%).
By the way, for the CUB-200 dataset we were able to reproduce the results (~69% R@1), both by training with the same hyper-parameters and by running inference with your trained weights.
The paper states that the hyper-parameters for CUB-200 and Cars-196 are the same. Is there any hyper-parameter that differs between the training runs for these two datasets? Also, could you please check the trained weights for Cars-196?
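
For reference, here is a minimal sketch of how we compute Recall@K in our own evaluation code (ours, not taken from your repo), assuming `embeddings` is an (N, D) tensor of test-set embeddings and `labels` is the matching (N,) tensor of class labels:

```python
import torch
import torch.nn.functional as F

def recall_at_k(embeddings, labels, k=1):
    """Recall@K on a single gallery: a query counts as correct if any of its
    k nearest neighbours (itself excluded) shares the query's class label."""
    embeddings = F.normalize(embeddings, dim=1)  # L2-normalise for cosine similarity
    sims = embeddings @ embeddings.t()           # (N, N) similarity matrix
    sims.fill_diagonal_(float("-inf"))           # exclude each sample from its own neighbours
    _, nn_idx = sims.topk(k, dim=1)              # indices of the k nearest neighbours
    hits = (labels[nn_idx] == labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```

If this differs from how you compute R@K, that could also explain part of the gap.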
Thanks!