SeqNet: Learning Descriptors for Sequence-Based Hierarchical Visual Place Recognition
[ArXiv + Supplementary] [IEEE Xplore RA-L 2021] [ICRA 2021 YouTube Video]
and
SeqNetVLAD vs PointNetVLAD: Image Sequence vs 3D Point Clouds for Day-Night Place Recognition
[ArXiv] [CVPR 2021 Workshop 3DVR]
Jan 27, 2024: Download all pretrained models from here, the Nordland dataset from here, and precomputed descriptors from here.
Jan 18, 2022: MSLS training setup included.
Jan 07, 2022: Single-image vanilla NetVLAD feature extraction enabled.
Oct 13, 2021: Oxford & Brisbane Day-Night pretrained models download link (use the latest link provided above).
Aug 03, 2021: Added Oxford dataset files and a direct link to download the Nordland dataset (use the latest link provided above).
Jun 23, 2021: CVPR 2021 Workshop 3DVR paper, "SeqNetVLAD vs PointNetVLAD", now available on arXiv.
conda create -n seqnet numpy pytorch=1.8.0 torchvision tqdm scikit-learn faiss tensorboardx h5py -c pytorch -c conda-forge
Run bash download.sh (see the download links in the news entry from Jan 27, 2024)
to download single-image NetVLAD descriptors for the Nordland-clean dataset (3.4 GB) [a] and the Oxford dataset (0.3 GB), as well as Nordland-trained model files (1.5 GB) [b]. Other pre-trained models for Oxford and Brisbane Day-Night can be downloaded from here.
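Once downloaded, the descriptor files can be loaded for inspection. The sketch below is a generic loader, assuming the descriptors are stored as a NumPy array of shape (numImages, dims); the actual filenames and on-disk layout under ./data/ may differ, so inspect the downloaded files first.

```python
import numpy as np

def load_descriptors(path):
    """Load precomputed single-image descriptors and L2-normalise each row.

    Assumes the file stores a (numImages, dims) float array, one row per
    image; this is an illustrative assumption, not the repository's exact
    storage format.
    """
    desc = np.load(path).astype(np.float32)
    # Row-wise L2 normalisation, as is standard before nearest-neighbour
    # search over NetVLAD-style descriptors.
    desc /= np.linalg.norm(desc, axis=1, keepdims=True)
    return desc
```

For example, `load_descriptors("./data/<descFileName>.npy")` (path hypothetical) would return unit-norm descriptors ready for retrieval.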
To train sequential descriptors through SeqNet on the Nordland dataset:
python main.py --mode train --pooling seqnet --dataset nordland-sw --seqL 10 --w 5 --outDims 4096 --expName "w5"
or the Oxford dataset (set --dataset oxford-pnv for a PointNetVLAD-like data split, as described in the CVPR 2021 Workshop paper):
python main.py --mode train --pooling seqnet --dataset oxford-v1.0 --seqL 5 --w 3 --outDims 4096 --expName "w3"
or the MSLS dataset (optionally setting --msls_trainCity and --msls_valCity, which otherwise use their default values):
python main.py --mode train --pooling seqnet --dataset msls --msls_trainCity melbourne --msls_valCity austin --seqL 5 --w 3 --outDims 4096 --expName "msls_w3"
To train transformed single descriptors through SeqNet:
python main.py --mode train --pooling seqnet --dataset nordland-sw --seqL 1 --w 1 --outDims 4096 --expName "w1"
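Conceptually, SeqNet pools a sequence of single-image descriptors with a temporal (1-D) convolution of filter width w; with --seqL 1 --w 1 this reduces to a learned linear transform of a single descriptor. The following is a minimal PyTorch sketch of that idea, not the repository's exact SeqNet module:

```python
import torch
import torch.nn as nn

class SeqNetSketch(nn.Module):
    """Illustrative SeqNet-style sequential pooling: a 1-D temporal
    convolution of width w over the descriptor sequence, followed by
    average pooling along time, producing one sequence descriptor."""

    def __init__(self, inDims=4096, outDims=4096, w=5):
        super().__init__()
        # Conv1d expects (batch, channels, time); descriptor dimensions
        # are treated as channels, frames as time steps.
        self.conv = nn.Conv1d(inDims, outDims, kernel_size=w)

    def forward(self, x):
        # x: (batch, seqL, inDims) -> (batch, inDims, seqL)
        x = x.permute(0, 2, 1)
        x = self.conv(x)           # (batch, outDims, seqL - w + 1)
        x = x.mean(dim=2)          # average over the remaining time axis
        return nn.functional.normalize(x, dim=1)

seq = torch.randn(2, 10, 4096)     # a batch of 10-frame descriptor sequences
out = SeqNetSketch(w=5)(seq)
print(out.shape)                   # torch.Size([2, 4096])
```

With w=1 and a single frame, the convolution acts as a per-descriptor linear layer, matching the "transformed single descriptors" setting above.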
On the Nordland dataset:
python main.py --mode test --pooling seqnet --dataset nordland-sf --seqL 5 --split test --resume ./data/runs/Jun03_15-22-44_l10_w5/
On the MSLS dataset (--msls_valCity can also be set to melbourne or austin):
python main.py --mode test --pooling seqnet --dataset msls --msls_valCity amman --seqL 5 --split test --resume ./data/runs/<modelName>/
The above will reproduce results for SeqNet (S5) as per Supp. Table III on Page 10.
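Under the hood, evaluation of this kind boils down to nearest-neighbour retrieval over the sequential descriptors followed by a Recall@K count. A generic sketch of that metric (not the exact evaluation code in main.py), assuming L2-comparable descriptors and one ground-truth database index per query:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def recall_at_k(db, queries, gt, k=1):
    """Fraction of queries whose ground-truth database index appears
    among the top-k retrieved neighbours (Euclidean distance, which on
    L2-normalised descriptors ranks identically to cosine similarity)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(db)
    _, idx = nbrs.kneighbors(queries)
    hits = sum(gt[i] in idx[i] for i in range(len(queries)))
    return hits / len(queries)
```

For a sanity check, querying a database with copies of its own rows and the matching indices as ground truth should give a recall of 1.0.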
The code in this repository is based on Nanne/pytorch-NetVlad. Thanks to Tobias Fischer for his contributions to this code during the development of our project QVPR/Patch-NetVLAD.
@article{garg2021seqnet,
title={SeqNet: Learning Descriptors for Sequence-based Hierarchical Place Recognition},
author={Garg, Sourav and Milford, Michael},
journal={IEEE Robotics and Automation Letters},
volume={6},
number={3},
pages={4305--4312},
year={2021},
publisher={IEEE},
doi={10.1109/LRA.2021.3067633}
}
@misc{garg2021seqnetvlad,
title={SeqNetVLAD vs PointNetVLAD: Image Sequence vs 3D Point Clouds for Day-Night Place Recognition},
author={Garg, Sourav and Milford, Michael},
howpublished={CVPR 2021 Workshop on 3D Vision and Robotics (3DVR)},
month={Jun},
year={2021},
}
SeqMatchNet (2021); Patch-NetVLAD (2021); Delta Descriptors (2020); CoarseHash (2020); seq2single (2019); LoST (2018)
[a] This is the clean version of the dataset that excludes images from the tunnels and red lights and can be downloaded from here.
[b] These will be saved to ./data/ by default; you can modify this path in download.sh and get_datasets.py to specify your workdir.