QVPR / Patch-NetVLAD

Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"
MIT License

About Tokyo dataset #40

Closed DonMuv closed 2 years ago

DonMuv commented 2 years ago

Hi, when doing experiments on the Tokyo dataset, is Tokyo TimeMachine used for the training set and Tokyo 24/7 used for the test set, is that right?

Tobias-Fischer commented 2 years ago

Hi, We use Pitts as the training set - see Section 4.1 of our paper: "We train the underlying vanilla NetVLAD feature extractor [3] on two datasets: Pittsburgh 30k [80] for urban imagery (Pittsburgh and Tokyo datasets), and Mapillary Street Level Sequences [82] for all other conditions."

DonMuv commented 2 years ago

Some of the Tokyo247 query images are 2448x3264 (portrait orientation). How do you process these images, do you rotate them or compress them?

StephenHausler commented 2 years ago

Hi @DonMuv, we resize all our images to a uniform size of 640x480 (width by height), including the Tokyo247 query images. This resizing happens in the PyTorch transforms applied by the dataloader.