mihaidusmanu / d2-net

D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

How to manually give the correspondences to train the network #72

Closed Aniket-Gujarathi closed 3 years ago

Aniket-Gujarathi commented 3 years ago

Hi, I would like to know how I can manually provide the correspondences to train the network, instead of giving pose, depth, RGB, and intrinsics. It would be much more convenient for me to train that way in my project.

  1. Could you point me towards the code where we could provide direct correspondences?
  2. I have looked into the warp function in loss.py, and it looks like it returns the pixel locations of corresponding pixels as pos1 and pos2. But I am not sure about the ids tensor: is it returning the descriptors' indices for pos1 in row-major order? Could we replace this warping function with our own input data?
mihaidusmanu commented 3 years ago

In theory, you can replace the warp function with your own correspondences.

The ids are used to filter out points / descriptors that do not have correspondences. As you stated, they are simply the row-major indices of the valid points of the grid. https://github.com/mihaidusmanu/d2-net/blob/2a4d88fbe84961a3a17c46adb6d16a94b87020c5/lib/loss.py#L70-L72
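For anyone attempting this, below is a minimal sketch of how precomputed correspondences could be converted into the (pos1, pos2, ids) triplet that warp is expected to return. The function name correspondences_to_warp_outputs and the downscaling factor of 4 are assumptions for illustration only; the actual feature-map scaling and position upscaling used by D2-Net should be checked against lib/utils.py before plugging this into the loss.

```python
import torch

def correspondences_to_warp_outputs(kpts1, kpts2, h1, w1, scaling=4.0, device='cpu'):
    """Hypothetical helper: turn precomputed pixel correspondences into
    (pos1, pos2, ids) with the same shapes warp returns.

    kpts1, kpts2: [N, 2] matching (row, col) pixel coordinates in image 1 / 2.
    h1, w1:       spatial size of the dense feature map of image 1.
    scaling:      assumed image-to-feature-map downscaling factor (4 here).
    """
    kpts1 = torch.as_tensor(kpts1, dtype=torch.float32, device=device)
    kpts2 = torch.as_tensor(kpts2, dtype=torch.float32, device=device)

    # Snap image-1 keypoints to the nearest feature-map cell.
    fmap_rows = torch.round(kpts1[:, 0] / scaling).long()
    fmap_cols = torch.round(kpts1[:, 1] / scaling).long()

    # Keep only correspondences that fall inside the feature map.
    valid = (fmap_rows >= 0) & (fmap_rows < h1) & (fmap_cols >= 0) & (fmap_cols < w1)
    fmap_rows, fmap_cols = fmap_rows[valid], fmap_cols[valid]

    # Row-major index into the flattened h1 x w1 grid; this is what ids is
    # used for when indexing descriptors1 / scores1 in the loss.
    ids = fmap_rows * w1 + fmap_cols

    pos1 = kpts1[valid].t()  # [2, M] image-1 pixel coordinates
    pos2 = kpts2[valid].t()  # [2, M] matching image-2 pixel coordinates
    return pos1, pos2, ids
```

Note that if several keypoints snap to the same feature-map cell, ids will contain duplicates, which the original warp output does not; depending on your data you may want to deduplicate them before computing the loss.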

Aniket-Gujarathi commented 3 years ago

Hi, thank you for the clarification.