mihaidusmanu / d2-net

D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

Details about how to find correspondences in the MegaDepth dataset #89

Closed gleefe1995 closed 2 years ago

gleefe1995 commented 2 years ago

Your paper was informative, thank you. I would like to know the details of how correspondences between two images of a pair are found in the MegaDepth dataset.

This is how I understood the correspondence finding: using the sparse SfM point cloud, all 3D points seen (or reconstructed?) by the second image are projected into the first image, and correspondences are established with the 2D locations obtained by projecting the 3D points seen by the first image into the first image. Then SIFT correspondences are found. Am I right?

However, in issue #45 you said that dense correspondences are obtained using the MVS depth maps. In this case, are all dense 3D points seen by the second image projected into the first image? If we consider every dense 3D point in the depth map, I think some of them are not needed (e.g. points too far away from the first image). Are such unneeded 3D points discarded during the depth check? How can we delete them, e.g. with a scale ratio? Can you explain the details? Thank you.
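For concreteness, here is a minimal sketch of the kind of depth check I have in mind, assuming pinhole intrinsics `K1`, `K2`, world-to-camera extrinsics `T1_w`, `T2_w`, and MVS depth maps for both images. The function names and the relative-error threshold are my own illustration, not code from this repository:

```python
import numpy as np

def backproject(depth, K):
    """Back-project every pixel of a depth map into 3D camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x (h*w)
    rays = np.linalg.inv(K) @ pix          # rays at unit depth
    return rays * depth.reshape(1, -1)     # 3 x (h*w), camera frame

def depth_check_matches(depth1, K1, T1_w, depth2, K2, T2_w, rel_thresh=0.05):
    """Project all pixels of image 2 into image 1 and keep only those whose
    projected depth agrees with image 1's own MVS depth map."""
    # 3D points of image 2, lifted to the world frame
    pts_c2 = backproject(depth2, K2)
    R2, t2 = T2_w[:3, :3], T2_w[:3, 3:]        # world -> camera 2
    pts_w = R2.T @ (pts_c2 - t2)

    # Transform into camera 1 and project
    R1, t1 = T1_w[:3, :3], T1_w[:3, 3:]        # world -> camera 1
    pts_c1 = R1 @ pts_w + t1
    z1 = pts_c1[2]                             # depth of the warped points in image 1
    proj = K1 @ pts_c1
    u1, v1 = proj[0] / proj[2], proj[1] / proj[2]

    h1, w1 = depth1.shape
    valid2 = depth2.reshape(-1) > 0            # image-2 pixels with a valid MVS depth
    in_view = valid2 & (z1 > 0) & (u1 >= 0) & (u1 < w1 - 0.5) & (v1 >= 0) & (v1 < h1 - 0.5)

    # Depth check: compare against image 1's depth at the projected location
    ui = np.round(u1[in_view]).astype(int)
    vi = np.round(v1[in_view]).astype(int)
    d1 = depth1[vi, ui]
    ok = (d1 > 0) & (np.abs(z1[in_view] - d1) < rel_thresh * d1)

    idx2 = np.flatnonzero(in_view)[ok]         # flat pixel indices in image 2
    return idx2, ui[ok], vi[ok]                # correspondence: pixel in image 2 -> (u, v) in image 1
```

The intuition behind this sketch is that a pixel of the second image gives a valid correspondence only if, after warping it into the first image via its MVS depth and the relative pose, its depth agrees with the first image's own depth map; pixels that are occluded, outside the frustum, or depth-inconsistent are dropped. Is this roughly what the depth check in your pipeline does?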