Closed Parskatt closed 1 year ago
The large number of images is also inconvenient purely in terms of size: downloading the precomputed depth plus the test set took over 100 GB, which is quite substantial.
Ok, so in the dataloader: https://github.com/nianticlabs/map-free-reloc/blob/1df14c18582b6bff3896144a8243811a322409f9/lib/datasets/mapfree.py#L173 there already seems to be stride = 5. Is this correct? In that case, when I cache matches, do I only need to compute [::5]?
So if I change this: https://github.com/nianticlabs/map-free-reloc/blob/1df14c18582b6bff3896144a8243811a322409f9/etc/feature_matching_baselines/compute.py#L80 to use [::5], that should produce correct results?
Not quite; I also had to change it in the dataloader (duh).
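For reference, a minimal sketch of the subsampling change discussed above, with hypothetical names (the actual pair-list construction in compute.py differs; only the [::5] slicing mirrors the dataloader's stride):

```python
# Hypothetical sketch: subsample query frames by the dataset stride before
# computing and caching correspondences, so the cache lines up with what the
# dataloader actually iterates over.
STRIDE = 5  # sampling factor used by the test/validation dataloader

def subsample_pairs(pairs, stride=STRIDE):
    """Keep every `stride`-th (reference, query) pair, mirroring the dataloader."""
    return pairs[::stride]

# Toy example: one reference frame matched against 20 query frames.
pairs = [("seq0/frame_00000.jpg", f"seq1/frame_{i:05d}.jpg") for i in range(20)]
kept = subsample_pairs(pairs)
print(len(kept))  # 4 pairs remain out of 20
```

The same slicing then has to be applied consistently in both compute.py and the dataloader, otherwise the cached correspondences and the sampled frames go out of sync.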
Indeed we sample the test/validation frames by a factor of 5. However, we still compute correspondences between the reference frame and every query image, without the subsampling. Your proposed change in compute.py
would address this, but you'd need to make sure that the correspondence loader
https://github.com/nianticlabs/map-free-reloc/blob/1df14c18582b6bff3896144a8243811a322409f9/lib/models/matching/feature_matching.py#L41
would load the correspondences for the correct frame.
I believe this could be accomplished by replacing the line
https://github.com/nianticlabs/map-free-reloc/blob/1df14c18582b6bff3896144a8243811a322409f9/lib/datasets/mapfree.py#L155
with
'pair_id': index,
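To illustrate why that replacement works, here is a small sketch with hypothetical names: if the sample dict carries the dataloader index as 'pair_id', a correspondence cache computed over the subsampled pairs can be indexed one-to-one.

```python
# Hypothetical sketch of the suggested change: the per-sample dict emitted by
# the dataset uses the dataloader index as 'pair_id', not the raw frame number.
def build_sample(index, query_frames):
    return {
        "query_path": query_frames[index],
        "pair_id": index,  # index into the subsampled pair list
    }

# A cache computed over the same subsampled pairs then lines up directly.
cached_correspondences = ["corr_0", "corr_1", "corr_2"]
sample = build_sample(1, ["frame_00000", "frame_00005", "frame_00010"])
print(cached_correspondences[sample["pair_id"]])  # corr_1
```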
We also provide pre-computed correspondences if you are using SIFT/SG/LoFTR, but if you are using another method you will need to compute them yourself.
Thanks :)
I can't help but notice that each scene has a lot of images that are highly correlated. This makes evaluation on my local machine take multiple hours (about 1 sec for matching and 1 sec for RANSAC).
Since the images in the query sequences are so correlated, simply taking every 10th frame ought to give almost identical results.
I get that I could do this manually for the train/val sets, but I'm simply interested in eval for benchmarking.
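A back-of-the-envelope sketch of the time saved by keeping every 10th query frame; the per-frame timings come from the numbers above, while the total frame count is a hypothetical placeholder:

```python
# Rough arithmetic: evaluation cost with and without 10x frame subsampling.
num_query_frames = 10_000           # hypothetical total across all test scenes
seconds_per_frame = 1.0 + 1.0       # ~1 s matching + ~1 s RANSAC (from above)
full_hours = num_query_frames * seconds_per_frame / 3600
subsampled_hours = full_hours / 10  # evaluate frames[::10] instead
print(f"full: {full_hours:.1f} h, subsampled: {subsampled_hours:.1f} h")
```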