deepak242424 opened this issue 4 years ago
Can you please share the links for pretrained models of your experiments in the paper?
The pretrained model has been uploaded. You can get it from https://github.com/gpcv-liujin/REDNet/blob/master/MODEL_FOLDER/MODEL_FOLDER.zip
Thanks. I didn't notice that earlier.
I want to ask one more question: in your paper you showed COLMAP results. Did you use ground-truth extrinsic/intrinsic parameters to generate the point clouds for the WHU data?
Also, could you tell me how you visualize the depth maps from COLMAP?
Many thanks.
Sorry, I just noticed your reply.
I used the ground-truth camera parameters as input so that the depth results obtained by COLMAP could be compared with the provided ground truth. The output depth maps are stored in the "dense/stereo/depth_maps/" directory of the COLMAP project. They are saved in COLMAP's binary (.bin) format, so I reloaded the .bin files and visualized them with Python.
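For anyone else who wants to do the same, here is a minimal sketch of such a loader, assuming COLMAP's standard binary depth-map layout (an ASCII header `width&height&channels&` followed by column-major float32 data); the example file name is hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

def read_colmap_array(path):
    """Load a COLMAP depth map stored in its binary format:
    an ASCII header "width&height&channels&" followed by float32 data."""
    with open(path, "rb") as fid:
        header = b""
        while header.count(b"&") < 3:  # read up to and including the third '&'
            header += fid.read(1)
        width, height, channels = (int(v) for v in header.split(b"&")[:3])
        data = np.fromfile(fid, np.float32)
    # COLMAP writes the array in column-major order.
    array = data.reshape((width, height, channels), order="F")
    return np.transpose(array, (1, 0, 2)).squeeze()

# Example: visualize one depth map (the file name is a placeholder).
depth = read_colmap_array("dense/stereo/depth_maps/0001.png.geometric.bin")
valid = depth > 0                                   # zeros mark missing depth
vmin, vmax = np.percentile(depth[valid], [5, 95])   # clip outliers for display
plt.imshow(depth, cmap="viridis", vmin=vmin, vmax=vmax)
plt.colorbar(label="depth")
plt.show()
```

Clipping the display range to an inner percentile is just a convenience here, since a few spurious depth values can otherwise wash out the color map.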
Thanks for your response.