mihaidusmanu / d2-net

D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

Undistorted and preprocessed images for our dataset #95

Open Anuradha-Uggi opened 1 year ago

Anuradha-Uggi commented 1 year ago

Hi, could you please clarify the points below:

  1. In the paper, the loss functions appear to operate on CNN feature maps. Where exactly are the ground-truth SfM correspondences used?
  2. The MegaDepth dataset and the shared SfM models are very large, and our downloads keep failing. Training D2-Net requires undistorted and preprocessed images. Could you spare a little time to explain how to generate these for our own dataset? It consists of RGB-thermal frames, and our goal is to train D2-Net for RGB-thermal image matching.
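On point 1, my current reading of the paper is that the correspondences supervise *where* the two descriptor maps are compared: matched pixel locations (derived from SfM depth and camera poses) index into the feature maps, and the loss pulls those descriptors together while pushing non-matching ones apart. A minimal NumPy sketch of that idea (the function name and the triplet-style margin are my assumptions, not the exact D2-Net loss):

```python
import numpy as np

def correspondence_loss(f1, f2, matches, margin=1.0):
    """Sketch: margin loss over ground-truth correspondences.

    f1, f2  : (C, H, W) dense descriptor maps from the CNN.
    matches : (N, 4) int array of [y1, x1, y2, x2] pixel pairs
              obtained from SfM depth + camera poses (the ground truth).
    """
    y1, x1, y2, x2 = matches.T
    d1 = f1[:, y1, x1].T                   # (N, C) descriptors at GT locations in image 1
    d2 = f2[:, y2, x2].T                   # (N, C) matching descriptors in image 2
    pos = np.linalg.norm(d1 - d2, axis=1)  # distances of true matches
    # hardest in-batch negative for each anchor (any non-matching pair)
    dist = np.linalg.norm(d1[:, None] - d2[None], axis=2)
    np.fill_diagonal(dist, np.inf)
    neg = dist.min(axis=1)
    return np.maximum(pos - neg + margin, 0.0).mean()
```

So the loss is computed between feature maps, but only at pixels the SfM ground truth says should match; please correct me if I have misread this.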

I am a research scholar working on image-matching algorithms. I have found very few RGB-thermal matching baseline papers, and yours is the only one with released code. It would be very helpful if you could share the details above. Thank you!
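On point 2, in case it helps others with the same problem: undistortion itself only needs the per-image camera parameters from the SfM model, not the full MegaDepth download. A minimal NumPy sketch under COLMAP's SIMPLE_RADIAL camera model (parameter names are my assumptions; nearest-neighbour sampling is used for brevity, real pipelines would interpolate):

```python
import numpy as np

def undistort_simple_radial(img, f, cx, cy, k):
    """Sketch: undistort a (H, W) image under the SIMPLE_RADIAL model
    (focal f, principal point (cx, cy), radial coefficient k).
    For each target (undistorted) pixel, apply the forward distortion
    to find its source location and sample nearest neighbour.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised camera coordinates of the undistorted pixel grid
    xn = (xs - cx) / f
    yn = (ys - cy) / f
    r2 = xn ** 2 + yn ** 2
    factor = 1.0 + k * r2  # radial distortion factor
    xs_src = np.clip(np.round(xn * factor * f + cx).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(yn * factor * f + cy).astype(int), 0, h - 1)
    return img[ys_src, xs_src]
```

With k = 0 this is the identity, which is a quick sanity check before running it on real calibration parameters.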

2970765122 commented 8 months ago

Hello, did you solve your problem successfully?