QVPR / VPRSNN

Code for our IEEE RA-L + ICRA 2022 paper "Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments", and our ICRA 2023 paper "Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition".
MIT License

Reproducibility issue persists #16

Closed: bobhagen closed this issue 2 months ago

bobhagen commented 2 months ago

Hi @Somayeh-h,

I am opening a new issue in case you did not receive a notification for the reply here. It would be great if you could take a look at it. Your reply only addresses the Nordland dataset; it is still not possible to reproduce the results published in your papers for the other datasets, because I could not regenerate the data you used.

Best, Robert

Somayeh-h commented 2 months ago

Hi @bobhagen,

Thanks for reaching out. Here are my responses to your questions:

Synthia Dataset: The data shared by Mubariz here is indeed the version we used. The "ref" and "query" folders contain the images we sampled for our method. The "ref_new" and "query_new" folders contain the same images, but with filenames padded with six leading zeros, whereas the filenames in the image names text files we provide have three leading zeros.

Our method uses the image names text file to read the specified 275 images from each folder ("ref" contains 911 images and "query" contains 813). This sampling is necessary because the method is sensitive to viewpoint shifts, which is why each traverse is reduced to the 275 images listed in the text file. To load the dataset correctly, ensure the image names in the folders match the provided text file, as in the sketch below.
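For illustration, here is a minimal loading sketch along these lines. The folder/file names, the `pad_width` argument, and the zero-padding handling are assumptions for this example, not the repo's actual code:

```python
import os
from PIL import Image  # pip install pillow

def load_sampled_images(folder, names_file, pad_width=None):
    """Load only the images listed in ``names_file`` from ``folder``.

    If the on-disk filenames use more zero-padding than the provided
    text file (e.g. the "ref_new"/"query_new" folders), pass
    ``pad_width`` to re-pad each name before looking it up on disk.
    """
    images = []
    with open(names_file) as f:
        for line in f:
            name = line.strip()
            if not name:
                continue
            if pad_width is not None:
                stem, ext = os.path.splitext(name)
                name = stem.zfill(pad_width) + ext  # adjust zero-padding
            images.append(Image.open(os.path.join(folder, name)))
    return images

# e.g. the 275 sampled reference images out of the 911 in "ref":
# ref_images = load_sampled_images("ref", "ref_image_names.txt")
```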

SPEDTest and St. Lucia Datasets: Yes, the SPEDTest dataset shared by Mubariz is the same version we used in our first paper, but we did not use that dataset in the arXiv paper you mentioned. As for the St. Lucia dataset, the version shared by Mubariz is temporarily unavailable, but you can access a newer collection of the dataset here; make sure the reference and query images are aligned.

SFU Mountain Dataset: You can find the dataset on the website created by the dataset authors here.

ORC Dataset: The ORC dataset used in our work originates from the original Oxford RobotCar dataset. The preprocessing that aligns the images between traverses was done previously by a former colleague, and I unfortunately cannot share that aligned version. However, you can still align the reference and query images using the ground truth information provided with the dataset, which should also help with sampling the images approximately every 10 meters (see the sketch below). Of course, if you are using these methods for comparison, then as long as the same form of the dataset is used consistently for all methods and the required preprocessing is done, that should be sufficient.
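As a rough illustration of the 10-meter sampling, here is a minimal distance-thresholding sketch. It assumes the ground truth has been parsed into an ordered (N, 2) array of planar coordinates; the variable names and the exact ground truth format are assumptions, not the dataset's API:

```python
import numpy as np

def sample_every_n_meters(positions, spacing_m=10.0):
    """Return indices of poses spaced roughly ``spacing_m`` apart.

    ``positions`` is an (N, 2) array of ground-truth x/y (or
    easting/northing) coordinates, ordered along the traverse.
    """
    positions = np.asarray(positions, dtype=float)
    keep = [0]
    last = positions[0]
    for i in range(1, len(positions)):
        # keep a frame once we have moved at least spacing_m since the last kept one
        if np.linalg.norm(positions[i] - last) >= spacing_m:
            keep.append(i)
            last = positions[i]
    return keep

# indices = sample_every_n_meters(gps_xy, spacing_m=10.0)
# sampled_images = [image_paths[i] for i in indices]
```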

I understand your concern about replicating our results accurately; I unfortunately cannot share data that I do not own. We are committed to transparency and replicability, and I appreciate your effort in understanding our methods. If you encounter any further difficulties, please don't hesitate to reach out to the corresponding author of the papers.

Best regards, Somayeh