Closed kalanityL closed 6 years ago
Answering my own question in case anyone else runs into it (thanks to https://github.com/JiaRenChang/PSMNet/issues/56#issuecomment-398021563):
1/ download the development kit for the scene flow benchmark here: http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo
2/ you have to use the disp_to_color.m file.
Assuming you have a file test.png in the same folder, this simple snippet works:
disp('======= KITTI 2015 Benchmark Demo =======');
D_test = disp_read('test.png');             % read the 16-bit disparity map
D_test_color = disp_to_color(D_test, 192);  % colorize with max disparity 192
imwrite(D_test_color, 'test_color.png');
test.png:
test_color.png
Thank you very much for your sharing.
Do you know how to evaluate the disparity? Could you share that with us? Thank you so much.
What do you mean by "evaluate the disparity"?
I have rewritten disp_to_color.m in Python; issues are welcome, and I will try to make the script run faster.
https://github.com/passion3394/PSMNet_CPU/blob/master/disp_to_color.py
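For anyone who just wants a quick colorized preview without MATLAB/Octave, the idea can be sketched in plain NumPy. Note that this uses a simple jet-like colormap of my own, not the exact KITTI colormap implemented in disp_to_color.m, so the colors will not match the official benchmark figures:

```python
import numpy as np

def colorize_disparity(disp, max_disp=192.0):
    """Map a float disparity array (H, W) to an RGB uint8 image (H, W, 3).

    Uses a simple jet-like colormap as a stand-in for the official
    KITTI colormap; pixels with disp <= 0 are treated as invalid
    and painted black.
    """
    d = np.clip(disp / max_disp, 0.0, 1.0)   # normalize to [0, 1]
    # Piecewise-linear ramps: blue -> cyan -> yellow -> red.
    r = np.clip(1.5 - np.abs(4.0 * d - 3.0), 0.0, 1.0)
    g = np.clip(1.5 - np.abs(4.0 * d - 2.0), 0.0, 1.0)
    b = np.clip(1.5 - np.abs(4.0 * d - 1.0), 0.0, 1.0)
    rgb = np.stack([r, g, b], axis=-1)
    rgb[disp <= 0] = 0.0                     # mark invalid pixels black
    return (rgb * 255).astype(np.uint8)
```

The resulting array can be saved with any image library (e.g. PIL's Image.fromarray followed by save).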
@kalanityL Thanks for sharing! May I ask how you achieved this super smooth disparity map ("test.png" from https://github.com/JiaRenChang/PSMNet/issues/59#issuecomment-398029585)? From which KITTI dataset is this disparity map (2012, 2015, raw)? How was it preprocessed? Your answer would be very helpful to my current project! Thanks in advance!
Hello, I also bumped into the color representation of disparity after reading a paper on this topic. Can somebody please clarify what each channel of the RGB representation encodes? And why would we need an RGB representation for disparity, since it is a scalar value?
@manuelmaior29 The way I see it, the color representation of the disparity map is only for presentation purposes to make it easier for humans to recognize the structure/value distribution in the map. Hence, for computational applications you only use the "default" disparity map.
@GoHeFa This makes it a bit clearer then. Thank you for your answer!
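Regarding the "default" disparity map mentioned above: the KITTI devkit (disp_read.m) stores disparities as 16-bit PNGs where the stored value is the true disparity times 256, and 0 marks invalid pixels. Decoding can be sketched like this (decode_kitti_disparity is my own hypothetical helper; loading the PNG itself, e.g. with cv2.imread(path, cv2.IMREAD_UNCHANGED), is omitted):

```python
import numpy as np

def decode_kitti_disparity(raw_uint16):
    """Decode a KITTI disparity image already loaded as a uint16 array.

    KITTI encodes disparity * 256 in a 16-bit PNG; a stored value of 0
    means the pixel has no valid disparity (mirrors disp_read.m).
    """
    disp = raw_uint16.astype(np.float32) / 256.0
    disp[raw_uint16 == 0] = -1.0   # flag invalid pixels, as the devkit does
    return disp
```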
When I run submission.py on KITTI 2012, I get the disparity as output. Is that the expected output of submission.py?
python submission.py --maxdisp 192 --model stackhourglass --KITTI 2012 --datapath DATA/Kitti2012/testing/ --loadmodel models/pretrained/pretrained_model_KITTI2015.tar
I was thinking I would get a depth map of this kind (not the same image, just an example):
I am a bit confused.
I understand the formula depth = baseline * focal / disparity. Is that what I should implement to generate depth from the output of submission.py?
Or is there some parameter I can pass to generate it directly when running submission.py?
I checked the KITTI development kit and didn't find it helpful.
Any help would be welcome.
Thank you (And thank you anyway for the great implementation - great work)
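The depth formula quoted above can be sketched in Python as follows. The baseline and focal length below are only illustrative, roughly KITTI-like values (about 0.54 m baseline and about 721 px focal length); the real numbers must be read from the calibration files of the specific sequence:

```python
import numpy as np

def disparity_to_depth(disp, baseline_m=0.54, focal_px=721.0):
    """Convert a disparity map (pixels) to depth (meters) via
    depth = baseline * focal / disparity.

    baseline_m and focal_px are illustrative KITTI-like defaults,
    NOT values read from any calibration file.
    """
    disp = np.asarray(disp, dtype=np.float32)
    depth = np.full_like(disp, np.inf)       # invalid pixels -> infinite depth
    valid = disp > 0
    depth[valid] = baseline_m * focal_px / disp[valid]
    return depth
```

You would apply this to the disparity array produced by submission.py yourself; I did not find a flag for it either.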