JiaRenChang / PSMNet

Pyramid Stereo Matching Network (CVPR2018)
MIT License
1.43k stars · 424 forks

how to get depth image from disparity map #59

Closed · kalanityL closed 6 years ago

kalanityL commented 6 years ago

When I run submission.py on KITTI 2012, I get the disparity as output. Is that the expected output from the submission.py run?

python submission.py --maxdisp 192 --model stackhourglass --KITTI 2012 --datapath DATA/Kitti2012/testing/ --loadmodel models/pretrained/pretrained_model_KITTI2015.tar

(screenshot: grayscale disparity map produced by submission.py)

I was thinking I would get a depth map of this kind (not the same image, just for example):

(example image: colorized depth map)

I am a bit confused.

I understand the formula `depth = baseline * focal / disparity`. Is that what I should implement to generate depth from the output of submission.py?
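For reference, the formula above is easy to sketch in NumPy. This is just an illustration, not part of submission.py; the guard against zero disparity and the `baseline_m`/`focal_px` parameter names are my own additions:

```python
import numpy as np

def disparity_to_depth(disparity, baseline_m, focal_px):
    """depth = baseline * focal / disparity, with invalid (<= 0) disparities
    mapped to infinity instead of dividing by zero."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = baseline_m * focal_px / disparity[valid]
    return depth
```

The KITTI stereo rig has a baseline of roughly 0.54 m and a focal length around 721 px, but the exact values should be taken from the per-frame calibration files.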

Or is there some parameter I can pass so that submission.py generates it directly?

I checked the KITTI development kit and didn't find it helpful.

Any help would be welcome.

Thank you (And thank you anyway for the great implementation - great work)

kalanityL commented 6 years ago

Answering my own question, in case anyone else has it (thanks to https://github.com/JiaRenChang/PSMNet/issues/56#issuecomment-398021563):

1/ download the KITTI scene flow development kit here: http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo

2/ you have to use the disp_to_color.m file.

Assuming you have a file test.png in the same folder, this short snippet works:

```matlab
disp('======= KITTI 2015 Benchmark Demo =======');

% read the 16-bit disparity PNG and render it with the KITTI color map
D_test = disp_read('test.png');
D_test_color = disp_to_color(D_test, 192);
imwrite(D_test_color, 'test_color.png');
```

test.png: (grayscale disparity input)

test_color.png: (colorized result)
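The disp_read step can also be reproduced without MATLAB. A minimal Python sketch, assuming the standard KITTI encoding (a uint16 PNG holding disparity scaled by 256, with a raw value of 0 marking invalid pixels); the function name mirrors the devkit's but this is my own code:

```python
import numpy as np

def disp_read(png_u16):
    """Decode a KITTI disparity image already loaded as a uint16 array.

    KITTI stores disparity * 256; a raw value of 0 marks an invalid
    pixel, which is mapped to -1 here (matching the devkit convention).
    """
    disp = png_u16.astype(np.float32) / 256.0
    disp[png_u16 == 0] = -1.0
    return disp
```

Load the PNG with e.g. PIL (`np.array(Image.open('test.png'))`) before calling this, and make sure the array really is uint16 rather than 8-bit.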

hurricane2018 commented 6 years ago

Thank you very much for your sharing.

hurricane2018 commented 6 years ago

Do you know how to evaluate the disparity? Could you share that with us? Thank you so much.

kalanityL commented 6 years ago

What do you mean by "evaluate the disparity" ?

passion3394 commented 4 years ago

I have rewritten disp_to_color.m in Python; issues are welcome if it doesn't work for you. I will try to make the script run faster.

https://github.com/passion3394/PSMNet_CPU/blob/master/disp_to_color.py
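For anyone who just needs a quick visualization rather than the exact KITTI palette, the same idea can be sketched in plain NumPy: normalize the disparity and linearly interpolate between a few color stops. The ramp below is a rough stand-in I chose for illustration, not the devkit's color map:

```python
import numpy as np

# Blue -> cyan -> green -> yellow -> red ramp (illustrative, not KITTI's palette)
_STOPS = np.array([[0, 0, 255], [0, 255, 255], [0, 255, 0],
                   [255, 255, 0], [255, 0, 0]], dtype=np.float32)

def disp_to_color_simple(disp, max_disp=192.0):
    """Map disparities to an HxWx3 uint8 RGB image by interpolating
    between the color stops above."""
    t = np.clip(np.asarray(disp, dtype=np.float32) / max_disp, 0.0, 1.0)
    pos = t * (len(_STOPS) - 1)            # fractional index into the stops
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(_STOPS) - 1)
    frac = (pos - lo)[..., None]
    rgb = _STOPS[lo] * (1.0 - frac) + _STOPS[hi] * frac
    return rgb.astype(np.uint8)
```

A disparity of 0 maps to blue and max_disp (192 here, matching --maxdisp) maps to red; anything in between is blended between the neighboring stops.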

GoHeFa commented 2 years ago

@kalanityL Thanks for sharing! May I ask how you achieved this super smooth disparity map ("test.png" from https://github.com/JiaRenChang/PSMNet/issues/59#issuecomment-398029585)? From which KITTI dataset is this disparity map (2012, 2015, raw)? How was it preprocessed? Your answer would be very helpful to my current project! Thanks in advance!

manuelmaior29 commented 2 years ago

Hello, I also came across the color representation of disparity after reading a paper on this topic. Can somebody please clarify what each channel of the RGB representation encodes? Or why we would need an RGB representation for disparity at all, since it is a scalar value?

GoHeFa commented 2 years ago

@manuelmaior29 The way I see it, the color representation of the disparity map is only for presentation purposes to make it easier for humans to recognize the structure/value distribution in the map. Hence, for computational applications you only use the "default" disparity map.

manuelmaior29 commented 2 years ago

@GoHeFa This makes it a bit clearer then. Thank you for your answer!