Closed guangyaooo closed 3 years ago
I have tested the mAP of the released pre-trained model. (Note that the test dataset in the configuration needs to be modified; the test set in the paper only uses Caltech.) I get 0.8247 mAP, still a little lower than the 0.83 reported in the paper. Besides, the released pre-trained model seems to have been trained for 60 epochs, while the released code trains for 30 epochs.
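To reproduce the paper's detection number, the test split has to be restricted to Caltech. A minimal sketch of doing that, assuming the data config is a JSON file with a `test` mapping from dataset names to path-list files (the key names and layout here are assumptions for illustration, not the repo's exact schema):

```python
import json

def restrict_test_to_caltech(cfg: dict) -> dict:
    """Keep only Caltech entries in the 'test' split; leave other splits untouched."""
    cfg = dict(cfg)
    cfg["test"] = {name: path for name, path in cfg.get("test", {}).items()
                   if "caltech" in name.lower()}
    return cfg

# Example with a made-up config layout:
cfg = {
    "train": {"caltech": "data/caltech.train", "citypersons": "data/citypersons.train"},
    "test":  {"caltech": "data/caltech.val",   "citypersons": "data/citypersons.val"},
}
print(json.dumps(restrict_test_to_caltech(cfg)["test"]))
```

With the layout above, only the Caltech entry survives in the test split; the filtered config can then be written back out and passed to the evaluation script.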
Thank you very much, I'll try it again later. Did you test the TPR score for jde.uncertainty.pt?
Excuse me, what are your final TPR score and AP? Thanks.
Has anyone tested AP and TPR for the pre-trained model jde.uncertainty.pt? The results of my test are AP=80.54 and TPR@FAR=84, which are quite different from the AP=83.0 and TPR=90.4 claimed in the paper. Except for batch_size=8, all other hyperparameters were set to their defaults. The GPU used for this result is a TITAN V.

Part of the output for the Detection test:
![image](https://user-images.githubusercontent.com/26927195/86094751-86cac680-bae3-11ea-94c1-a09ca3365e6f.png)

Part of the output for the Embedding test:
![image](https://user-images.githubusercontent.com/26927195/86094890-b37ede00-bae3-11ea-9b20-1ba55467fb18.png)
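For anyone unsure what the embedding metric means: TPR@FAR is the true-positive rate of pair verification at a fixed false-accept rate over embedding similarity scores. A minimal NumPy sketch of that computation (my own illustration, not the repo's evaluation code):

```python
import numpy as np

def tpr_at_far(scores, labels, far_target=0.1):
    """TPR at a given FAR.

    scores: similarity score per pair (higher = more likely same identity).
    labels: 1 for genuine (same-identity) pairs, 0 for impostor pairs.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    neg = np.sort(scores[~labels])[::-1]  # impostor scores, descending
    # Threshold chosen so that at most far_target of impostor pairs are accepted.
    k = int(np.floor(far_target * len(neg)))
    thresh = neg[k] if k < len(neg) else -np.inf
    # TPR = fraction of genuine pairs scoring strictly above the threshold.
    return float(np.mean(scores[labels] > thresh))
```

For example, with scores `[0.9, 0.1, 0.5]` and labels `[1, 1, 0]`, `tpr_at_far(..., far_target=0.0)` accepts only the 0.9 pair, giving a TPR of 0.5.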