Closed UsmanMaqbool closed 1 year ago
The R@1=85.1 result is from the NetVLAD model in MATLAB, produced by the original NetVLAD paper. To replicate that value you'll need to install NetVLAD in MATLAB using their codebase (https://github.com/Relja/netvlad). To ensure fair benchmarking in our paper, we made sure to only use the codebases provided by the original authors themselves for all benchmarking results; this is the case for AP-GeM, DenseVLAD, NetVLAD, SuperGlue and DELG.
The R@1=83.7 that you are getting with our Pitts network is exactly correct.
To make this repository and implementation of NetVLAD a holistic package, we added the functionality to run NetVLAD by itself and, for this purpose, re-trained a network on the Pitts30k dataset using PyTorch. We noticed that we were unable to attain exactly the same performance as the original NetVLAD, potentially due to the different training framework (MatConvNet vs. PyTorch). However, we do perform better than the Nanne version.
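For context, the R@N numbers being compared here follow the standard place-recognition protocol: a query counts as correctly localized at rank N if any of its top-N retrieved database images lies within the ground-truth tolerance (typically 25 m for Pittsburgh). A minimal sketch of that computation (illustrative only; variable names are mine, not from this codebase):

```python
import numpy as np

def recall_at_n(predictions, ground_truth, n_values=(1, 5, 10)):
    """predictions: (num_queries, k) ranked database indices per query.
    ground_truth: per-query arrays of database indices that lie within
    the localization tolerance (typically 25 m for Pittsburgh)."""
    correct_at_n = np.zeros(len(n_values))
    for q, preds in enumerate(predictions):
        for i, n in enumerate(n_values):
            # correct at rank n if any top-n retrieval is a true positive
            if np.any(np.isin(preds[:n], ground_truth[q])):
                # correct at n implies correct at every larger n
                correct_at_n[i:] += 1
                break
    return correct_at_n / len(predictions)

# toy example: two queries with their top-3 retrievals
preds = np.array([[7, 3, 9], [1, 2, 5]])
gt = [np.array([3]), np.array([8])]
print(recall_at_n(preds, gt, n_values=(1, 3)))  # → [0.  0.5]
```

Small differences in this evaluation step (ground-truth tolerance, dataset split) can shift the reported recalls by a few tenths, independent of the model itself.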
Thanks a lot for your response; that clears it up for me. I was struggling with different PyTorch and CUDA versions, batch sizes, numbers of threads, etc., trying to achieve 85.1 :P
The official recalls for NetVLAD are 85.1 / 92.2 / 94.4. Since you participated in adding the PCA layer to the model and testing it on Pitts30k, I'm reaching out because, unfortunately, I couldn't achieve those results.

My environment: I tested the code on Torch 1.12 + CUDA 11.6 and Torch LTS 1.8 + CUDA 11.1. GPU: 3090 Ti.

Using your pretrained models
Download:
Test: I updated the resumepath in the performance.ini file, and this is how I tested the NetVLAD performance.
Result: The official NetVLAD results on the Pitts30k test set are 85.1 / 92.2 / 94.4; however, I could not get these results.

Using Nanne's pretrained model and adding PCA
Download: link
Add PCA: I updated cluster size = 64 in the train.ini before running as follows.
Result: I ran the same process and got the test results as follows.
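As an aside, the PCA-whitening step being added to the descriptors here can be sketched in plain NumPy (an illustrative sketch with made-up dimensions, not this repository's actual PCA-layer code):

```python
import numpy as np

def fit_pca_whiten(X, n_components):
    """Fit PCA with whitening on database descriptors X (n_samples, dim).
    Returns the mean and a (dim, n_components) projection matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # divide each axis by its singular value (scaled to a std-dev) so
    # every retained component has unit variance: that's the whitening
    W = Vt[:n_components].T / (S[:n_components] / np.sqrt(len(X) - 1))
    return mean, W

def apply_pca(X, mean, W):
    """Project descriptors and re-L2-normalize, as is usual for retrieval."""
    Y = (X - mean) @ W
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

# stand-ins for NetVLAD descriptors (the real 64-cluster model is 32768-D)
rng = np.random.default_rng(0)
db = rng.standard_normal((500, 256))
mean, W = fit_pca_whiten(db, n_components=128)
db_reduced = apply_pca(db, mean, W)
print(db_reduced.shape)  # (500, 128)
```

One practical caveat: the PCA must be fitted on descriptors from the same kind of imagery as the evaluation set; fitting it on a different dataset than the original authors used is itself enough to move the recalls.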
Could you please help / suggest something? I'll be thankful.
Usman