stschubert / VPR_Tutorial

GNU General Public License v3.0

About the version of the code #2

Closed MAX-OTW closed 1 year ago

MAX-OTW commented 1 year ago

Hi @stschubert, this is a very good job and good learning material for beginners. Congratulations! But I also have a suggestion. Is there a PyTorch version for this work? If not, is there any plan to add a PyTorch version of the code? Meanwhile, I think the "List of existing open-source implementations for VPR" is a module that can be constantly updated, which would make this work perfect :)

I am always looking forward to your kind response. Best regards.

stschubert commented 1 year ago

Hi @MAX-OTW,

Thank you very much for your kind comment and your suggestion :-)

Is there a PyTorch version for this work? If not, is there any plan to add a PyTorch version of the code?

We are planning to add code soon that does not depend on TensorFlow, but possibly on PyTorch. However, since the only part of the code that depends on TensorFlow is the local DELF descriptor (before the conversion into a holistic descriptor using HDC), we may add alternative descriptor(s) instead of a PyTorch version of the DELF descriptor. This allows everyone to use their preferred library and different descriptors :-)

Meanwhile, I think the "List of existing open-source implementations for VPR" is a module that can be constantly updated, which would make this work perfect :)

The "List of existing open-source implementations for VPR" is actually intended to be continuously updated. Here, we are also looking forward to contributions from the community :-)

Best wishes, Stefan

stschubert commented 1 year ago

I have now implemented the holistic AlexNet-conv3 descriptor from [1] with PyTorch, so you can switch away from the holistic HDC-DELF descriptor, which uses TensorFlow.

[1] N. Sünderhauf, S. Shirazi, F. Dayoub, B. Upcroft, and M. Milford, “On the performance of convnet features for place recognition,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 4297–4304.

MAX-OTW commented 12 months ago

@stschubert @Tobias-Fischer @oravus, thank you so much for your great work! I have another question: how do I create GThard.npy and GTsoft.npy for the various datasets in "VPR-Bench"? Could you please add a guide? Especially GThard.npy for datasets like Pittsburgh and Tokyo 24/7, I don't know how to create it. Do you have any good suggestions?

I am always looking forward to your kind response. Best regards.

stschubert commented 12 months ago

@MAX-OTW, thanks for your question. I'm currently on vacation without a laptop (writing from my cell phone), so I cannot give you a detailed answer to your question.

The creation strongly depends on the individual datasets. As far as I remember, some datasets in VPR-Bench have hand-labelled image pairs, so that GThard is a main diagonal. Others probably provide GPS or pose data, so that GThard has to be created from distance thresholds which are chosen manually using knowledge about the dataset. GTsoft then also depends on your knowledge. If there is no visual overlap between consecutive images, GTsoft simply equals GThard. If there is an overlap, you either have to dilate GThard based on the visual overlap until all image pairs with small visual overlap are included (typically for datasets with hand-labelled ground truth), or you have to choose higher distance thresholds that include all image pairs with small visual overlap (typically for datasets with GPS or poses as ground truth).
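The two cases described above can be sketched in NumPy. This is a hypothetical illustration, not code from VPR-Bench: the thresholds, the dilation radius, and the function names are illustrative assumptions that must be chosen per dataset.

```python
import numpy as np

# Case 1: GPS/pose ground truth -> threshold pairwise distances.
# pos_db, pos_q: (N, 2) and (M, 2) arrays of metric positions (e.g., UTM).
# thresh_hard/thresh_soft are dataset-specific assumptions, not fixed values.
def gt_from_positions(pos_db, pos_q, thresh_hard=5.0, thresh_soft=15.0):
    d = np.linalg.norm(pos_q[:, None, :] - pos_db[None, :, :], axis=2)
    return d <= thresh_hard, d <= thresh_soft  # GThard, GTsoft

# Case 2: hand-labelled pairs (here: a main diagonal) -> dilate GThard
# with a cross-shaped structuring element to obtain GTsoft, so that
# neighbouring image pairs with small visual overlap are included.
def gtsoft_from_dilation(GThard, radius=1):
    GTsoft = GThard.copy()
    for r in range(1, radius + 1):
        GTsoft[r:, :] |= GThard[:-r, :]   # shift down
        GTsoft[:-r, :] |= GThard[r:, :]   # shift up
        GTsoft[:, r:] |= GThard[:, :-r]   # shift right
        GTsoft[:, :-r] |= GThard[:, r:]   # shift left
    return GTsoft

# tiny example along a 1-D trajectory (positions in metres)
pos_db = np.array([[0., 0.], [10., 0.], [30., 0.]])
pos_q = np.array([[1., 0.], [29., 0.]])
GThard, GTsoft = gt_from_positions(pos_db, pos_q)

GThard_diag = np.eye(4, dtype=bool)       # hand-labelled: diagonal matches
GTsoft_diag = gtsoft_from_dilation(GThard_diag)
```

The resulting boolean matrices can be saved with `np.save("GThard.npy", GThard)` and `np.save("GTsoft.npy", GTsoft)`; the appropriate thresholds and dilation radius have to be determined from knowledge about each dataset.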

As I said, I would have to look into VPR-Bench for a detailed answer, but this is currently difficult for me.

Best, Stefan