isl-org / MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
MIT License

Some questions about calculating the disparity map using optical flow and a pre-trained semantic segmentation model #144

Open women1995 opened 2 years ago

women1995 commented 2 years ago

Thank you for your excellent work! Your idea is innovative, and thanks for sharing the code. I would like to use it to generate my own training data, and I have a few questions:

① I am not sure how to combine the disparity map with the semantic segmentation results during training; this does not seem to be handled in get_disp_and_uncertainty.py (i.e., https://github.com/lasinger/3DVideos2Stereo/blob/master/get_disp_and_uncertainty.py).

② The paper states: "In a final step, we detect pixels that belong to sky regions using a pre-trained semantic segmentation model and set their disparity to the minimum disparity in the image." I am not sure whether this means that sky regions already have the minimum disparity in the image, or that the minimum disparity in the image is assigned to the sky regions.

Thank you for your kind consideration of these questions. Best regards.
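For reference, a minimal sketch of the quoted sky-handling step under the second reading (sky pixels are overwritten with the smallest disparity found in the image). The function name and the `sky_mask` input are illustrative assumptions, not part of the repository:

```python
import numpy as np

def set_sky_to_min_disparity(disparity: np.ndarray, sky_mask: np.ndarray) -> np.ndarray:
    """Overwrite sky pixels with the minimum disparity found in the image.

    disparity: (H, W) float array of per-pixel disparity.
    sky_mask:  (H, W) bool array, True where a pre-trained semantic
               segmentation model predicts sky.
    """
    out = disparity.copy()
    # Literal reading of the paper: sky pixels receive the smallest disparity
    # present in the image, i.e. they are pushed to the farthest depth.
    # A stricter variant (an assumption, not from the paper) would take the
    # minimum over non-sky pixels only, to avoid noisy flow values in the sky.
    out[sky_mask] = disparity.min()
    return out
```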

wch1996 commented 2 years ago

Hello, did you figure out how to get the disparity from optical flow?
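As I understand the paper, the left and right frames of a stereo movie are passed to a dense optical flow method, the horizontal component of the left-to-right flow is taken as a proxy for disparity, and pixels with large vertical flow are treated as unreliable. A rough, self-contained illustration of that idea, using OpenCV's Farneback flow only as a stand-in for the actual flow method (function name, threshold, and sign convention are my own assumptions, not the repository's pipeline):

```python
import cv2
import numpy as np

def disparity_from_stereo_flow(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    """Estimate disparity for a (roughly rectified) stereo pair as the
    horizontal component of dense optical flow from the left to the right image."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    # Dense flow left -> right; flow[..., 0] is the horizontal displacement,
    # flow[..., 1] the vertical displacement.
    flow = cv2.calcOpticalFlowFarneback(left, right, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # For a rectified pair, a point in the left image shifts to the left in the
    # right image, so disparity is the negated horizontal flow component.
    disparity = -flow[..., 0]
    # Large vertical flow suggests unreliable matches / imperfect rectification;
    # the 2-pixel threshold here is an arbitrary illustrative choice.
    unreliable = np.abs(flow[..., 1]) > 2.0
    disparity[unreliable] = 0.0
    return disparity
```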