The depth computation for the KITTI dataset, https://github.com/princeton-vl/RAFT-3D/blob/877eb806cd0261ec828c684b41ecdb2490dccbb3/scripts/kitti_submission.py#L74-L75, seems to be missing the baseline (0.54 m according to http://www.cvlibs.net/datasets/kitti/setup.php).
You can compute depth from disparity as `depth = b * f / disparity`. Just using `depth = f / disparity` is fine for the synthetic dataset, since the baseline in Blender is set to 1.0. But how is the KITTI baseline incorporated into the KITTI depth computation? Are the disparity images prescaled to a baseline of 1 metre? Or is this somehow part of `DEPTH_SCALE` (which is 0.1 here)? Also, why is the disparity taken from GA-Net and not from the original dataset?
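
For reference, here is a minimal sketch of the conversion I would expect, assuming rectified images and disparity in pixels. The focal-length value and the function name are placeholders for illustration, not taken from the repo:

```python
import numpy as np

# Hypothetical sketch, not the repo's code: for a rectified stereo pair,
# metric depth follows depth = baseline * focal_length / disparity.
KITTI_BASELINE = 0.54  # metres, per http://www.cvlibs.net/datasets/kitti/setup.php
FOCAL_LENGTH = 721.5   # pixels; a placeholder, the real value comes from the KITTI calibration files

def depth_from_disparity(disparity, baseline=KITTI_BASELINE, focal=FOCAL_LENGTH):
    """Convert a disparity map (pixels) to metric depth (metres)."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # zero disparity means no measurement
    depth[valid] = baseline * focal / disparity[valid]
    return depth
```

If the disparities were prescaled to a 1 m baseline, the `b` factor would indeed drop out, which is why I am asking whether that is what happens here.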