QVPR / Patch-NetVLAD

Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"

Reproduce results of NetVLAD on RobotCar Seasons v2 #60

Closed · MAX-OTW closed this issue 2 years ago

MAX-OTW commented 2 years ago

@Tobias-Fischer @oravus Hi, first of all, I really appreciate your work. I'm trying to reproduce the NetVLAD results on RobotCar Seasons v2 from Table 1 and Supplementary Table 1, but I have run into some problems. I have extracted the global descriptors using NetVLAD with netvlad_extract.ini. Would it be possible to share the code for the subsequent steps to make reproduction easier? Detailed instructions would be even better. I look forward to your kind response. Best regards.

StephenHausler commented 2 years ago

Hi @MAX-OTW, this is something I've been working on; I will try to release code soon that replicates the full Patch-NetVLAD evaluation on the Visual Localization Challenge server.

In the meantime, I can offer the following suggestions. To evaluate Patch-NetVLAD on RobotCar Seasons, we populate each query pose by copying across the pose of the best-retrieved reference image. Basically, you need to write a function that copies the 6-DoF pose across: its input is a predictions file and its output is a pose file. Each line of this pose file has the format "left/1418721439517734.jpg 0.752168 -0.640854 -0.153381 0.00493814 -78.7556 -25.9724 54.9404", i.e. the image name followed by the rotation as a quaternion (qw qx qy qz) and the translation (tx ty tz).
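For illustration, here is a minimal sketch of such a pose-copying function. The predictions-file layout (whitespace-separated "query reference" pairs ordered best-first, with optional # comment lines) and the ref_poses lookup are assumptions made for this example, not a confirmed spec; adapt them to the files you actually have.

# Hypothetical sketch: copy the 6-DoF pose of the best-retrieved reference
# image to each query. The file layouts are assumptions, not confirmed formats.
from typing import Dict, List

def copy_best_match_poses(predictions_path: str,
                          ref_poses: Dict[str, List[float]],
                          output_path: str) -> None:
    # query image name -> name of its top-ranked retrieved reference image
    best_ref = {}
    with open(predictions_path) as f:
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue
            query, reference = line.split()[:2]
            best_ref.setdefault(query, reference)  # keep only the first (best) match

    with open(output_path, 'w') as f:
        for query, reference in best_ref.items():
            pose = ref_poses[reference]  # [qw, qx, qy, qz, tx, ty, tz]
            f.write(query + ' ' + ' '.join(str(v) for v in pose) + '\n')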

Also have a look at this repo, which might help: https://github.com/cvg/Hierarchical-Localization (it includes code to create these submission files).

MAX-OTW commented 2 years ago

Hi @StephenHausler, thank you for your reply and for your work! Could you please release the code and the specific steps for reproducing the NetVLAD results on RobotCar Seasons v2? I am currently preparing to reproduce this in my own work. Could you please help me? Thank you very much; I look forward to your reply!

StephenHausler commented 2 years ago

Unfortunately, due to other deadlines, I probably won't get around to creating the full reproduction scripts for a few weeks. In the meantime, I can attach here some code that creates the challenge submission file (note: you'll need to install Kapture to use it). You also need to download RobotCar Seasons locally and structure the dataset in the Kapture file format; unfortunately, this may not be trivial. The script has an input argument -p, which takes the pairsfile produced by the code in our repo. I've pasted the code below:

import argparse
import os
from collections import defaultdict
import pathlib

import kapture
import kapture.io.csv as csv

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='use pairsfiles and kaptures for pose estimation using image retrieval')
    parser.add_argument('-p', '--pairsfile-path', type=str, required=True,
                        help='path to pairsfile')
    parser.add_argument('-m', '--mapping-path', type=str, required=True,
                        help='path to mapping kapture')
    parser.add_argument('-o', '--output-path', type=str, required=True,
                        help='path to output LTVL challenge file')
    parser.add_argument('-d', '--decreasing', action='store_true',
                        help='set if descending scores indicate a better match')
    parser.add_argument('-i', '--inverse', action='store_true',
                        help='invert poses before recording them down in output file')
    args = parser.parse_args()

    # load the mapping (reference) kapture; if a rig is present, resolve
    # rig-relative poses into per-camera poses
    kdata_mapping = csv.kapture_from_dir(args.mapping_path)
    mytraj = kdata_mapping.trajectories
    if kdata_mapping.rigs:
        myrigs = kdata_mapping.rigs
        kapture.rigs_remove_inplace(mytraj, myrigs)

    with open(args.pairsfile_path, 'r') as f:
        image_pairs = csv.table_from_file(f)

    # group the retrieved (mapping image, score) tuples by query image;
    # scores are read from the file as strings, so cast for numeric comparison
    query_lookup = defaultdict(list)
    for query, mapping, score in image_pairs:
        query_lookup[query] += [(mapping, float(score))]

    # locate best match using pairsfile
    best_match_pairs = []
    for query, retrieved_mapping in query_lookup.items():
        if args.decreasing:
            best_match = min(retrieved_mapping, key=lambda x: x[1])[0]
        else:
            best_match = max(retrieved_mapping, key=lambda x: x[1])[0]
        best_match_pairs += [(query, best_match)]

    # recover pose from best match
    fname_to_pose_lookup = {}
    for ts, cam, fname in kapture.flatten(kdata_mapping.records_camera):
        fname_to_pose_lookup[fname] = mytraj[ts, cam]
    image_poses = {query: fname_to_pose_lookup[mapping] for query, mapping in best_match_pairs}

    # write the LTVL challenge submission file: one line per query,
    # "image_name qw qx qy qz tx ty tz"
    p = pathlib.Path(args.output_path)
    os.makedirs(str(p.parent.resolve()), exist_ok=True)
    with open(args.output_path, 'wt') as f:
        for image_filename, pose in image_poses.items():
            if args.inverse:
                pose = pose.inverse()
            line = [image_filename] + pose.r_raw + pose.t_raw
            line = ' '.join(str(v) for v in line) + '\n'
            f.write(line)

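As a usage illustration (the file names here are hypothetical, not from the repo): save the script as, say, create_submission.py, then with RobotCar Seasons already converted to the kapture format run

python create_submission.py -p patchnetvlad_predictions.txt -m /path/to/robotcar/mapping -o robotcar_submission.txt

adding -d if lower scores indicate better matches, and -i if the submission server expects inverted poses.
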
MAX-OTW commented 2 years ago

Hello @Tobias-Fischer @StephenHausler, how were the Mapillary (challenge/test) results in the Patch-NetVLAD paper evaluated? Can you tell me how I should evaluate them? Specific steps would be even better.

Hope to get your reply as soon as possible. Thanks & Best Regards!

StephenHausler commented 2 years ago

Hi @MAX-OTW, the Mapillary results are in Table 1, in the columns Mapillary (Challenge) and Mapillary (Val. set). The city allocations for the train, val and test (challenge) sets are:

'train': ["trondheim", "london", "boston", "melbourne", "amsterdam", "helsinki", "tokyo", "toronto", "saopaulo", "moscow", "zurich", "paris", "bangkok", "budapest", "austin", "berlin", "ottawa", "phoenix", "goa", "amman", "nairobi", "manila"]
'val': ["cph", "sf"]
'test': ["miami", "athens", "buenosaires", "stockholm", "bengaluru", "kampala"]

To get the results on the val set, install https://github.com/mapillary/mapillary_sls and run evaluate.py, passing in the predictions produced by Patch-NetVLAD. To get the results on the test set, you'll need to do some investigation to find where the latest challenge submission portal is now located; take a look at https://sites.google.com/view/ltvl2021/challenges. (Mapillary does not release ground-truth data for the test set, so you will need to find the latest submission portal or email the Mapillary team.)
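As a rough sketch of the val-set evaluation (check the exact flags against the mapillary_sls README before relying on them; the paths are hypothetical), the call would look along these lines:

python evaluate.py --msls-root /path/to/MSLS --prediction /path/to/patchnetvlad_predictions.txt

where the prediction file lists, for each query image, the retrieved database images in ranked order.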