mihaidusmanu / d2-net

D2-Net: A Trainable CNN for Joint Description and Detection of Local Features
782 stars, 164 forks

Preprocessed data #26

Closed phseo closed 4 years ago

phseo commented 4 years ago

While I already have another issue open, I am opening a new one since this is a completely different topic.

In the scene-info generation step of the preprocessing pipeline, I see that you use the images produced by re-running undistortion. Why not use the images released with MegaDepth directly? Since they match their corresponding depth maps, they should also be undistorted. The dataset contains fewer images than the full set of undistorted images, because it only includes those that have a depth map. Is there a special reason for this?

Thanks!

mihaidusmanu commented 4 years ago

You can use either the images released with MegaDepth or the new ones obtained after re-running undistortion; there should not be any noticeable difference between them. However, the undistortion step is still necessary because the authors of MegaDepth did not release the camera intrinsics for the undistorted images.
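For anyone running this step themselves: COLMAP's `image_undistorter` writes the recovered intrinsics to a `cameras.txt` file (one line per camera: `CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`, where the `PINHOLE` model's params are `fx fy cx cy`). A minimal sketch of reading those intrinsics back, assuming the standard COLMAP text format (this parser is illustrative, not part of the D2-Net repository):

```python
# Minimal sketch (not from the d2-net repo): parse COLMAP's cameras.txt,
# as written after undistortion, to recover per-camera intrinsics.

def parse_cameras_txt(lines):
    """Return {camera_id: (model, width, height, params)} from cameras.txt lines."""
    cameras = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        parts = line.split()
        camera_id = int(parts[0])
        model = parts[1]
        width, height = int(parts[2]), int(parts[3])
        # For the PINHOLE model the params are: fx, fy, cx, cy
        params = [float(p) for p in parts[4:]]
        cameras[camera_id] = (model, width, height, params)
    return cameras

# Example entry in the format COLMAP writes after undistortion:
sample = [
    "# Camera list with one line of data per camera:",
    "#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]",
    "1 PINHOLE 1600 1200 1154.9 1154.9 800.0 600.0",
]
cams = parse_cameras_txt(sample)
```

After undistortion all cameras use the `PINHOLE` model, so the four params map directly into the intrinsic matrix K.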

phseo commented 4 years ago

Yes, I am currently re-running the undistortion, and your answer confirms my thoughts. Thank you very much.