Hi, We use Pitts as the training set - see Section 4.1 of our paper: "We train the underlying vanilla NetVLAD feature extractor [3] on two datasets: Pittsburgh 30k [80] for urban imagery (Pittsburgh and Tokyo datasets), and Mapillary Street Level Sequences [82] for all other conditions."
Some of the Tokyo247 query images are 2448x3264. How do you process these images: do you rotate them or compress them?
Hi @DonMuv, we resize all our images to a uniform size of 640x480 (width by height), including the Tokyo247 query images. This resizing happens as part of the PyTorch transforms in the dataloader.
Hi, when doing experiments on the Tokyo dataset, Tokyo Time Machine is used for the training set and Tokyo 24/7 is used for the test set, is that right?