SysCV / shift-dev

SHIFT Dataset DevKit - CVPR2022
https://www.vis.xyz/shift
MIT License

The parameters to convert depth to disparity #1

Closed xiaoxTM closed 2 years ago

xiaoxTM commented 2 years ago

Hi all, thank you for making such a great dataset. I am trying to use it for depth estimation via stereo matching. Although depth maps are available for each image, the parameters for converting depth to disparity (i.e., fx, fy, cx, cy, baseline) are hard to figure out. Is there any document or page that describes these parameters?

thank you

suniique commented 2 years ago

Hey @xiaoxTM, thanks for your interest in our work!

As a general response, we have listed the detailed position of each camera in config/sensors.yaml, where location denotes the offset along the x, y, and z axes, in meters, from the ego vehicle's origin. The relative pose of the cameras can be read from this file; e.g., the baseline of the stereo pair (the gap between the front and left_stereo cameras) is 0.2 m.
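As a rough sketch, the baseline between any two cameras can be recovered from their location entries. The dictionary below only illustrates the idea; the actual keys and values in config/sensors.yaml may differ, except for the 0.2 m front/left_stereo offset stated above.

```python
import numpy as np

# Hypothetical camera positions in the ego-vehicle frame (meters), mimicking
# the `location` field of config/sensors.yaml. Only the 0.2 m stereo offset
# is taken from the reply; the other coordinates are placeholders.
sensors = {
    "front":       {"location": [0.0, 0.0, 1.6]},
    "left_stereo": {"location": [0.0, -0.2, 1.6]},
}

def baseline(cam_a: str, cam_b: str) -> float:
    """Euclidean distance between two camera origins, in meters."""
    a = np.asarray(sensors[cam_a]["location"], dtype=float)
    b = np.asarray(sensors[cam_b]["location"], dtype=float)
    return float(np.linalg.norm(a - b))

print(baseline("front", "left_stereo"))  # ≈ 0.2
```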

Next, regarding the intrinsic parameters: focal_x = focal_y = 640 and (center_x, center_y) = (640, 400). Note that the focal length is computed as focal_x = focal_y = width / (2 * tan(FoV * np.pi / 360.0)), which gives 640 in our case. All RGB cameras share the same intrinsics.
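Putting the two replies together, depth can be converted to disparity with the standard pinhole-stereo relation disparity = focal * baseline / depth. The width of 1280 px and 90° FoV below are inferred from the stated principal point (640, 400) and focal length 640, so treat them as assumptions rather than confirmed values.

```python
import numpy as np

# Focal length from the formula in the reply above.
# width = 1280 and FoV = 90 deg are assumptions consistent with focal = 640.
width, fov_deg = 1280, 90.0
focal = width / (2.0 * np.tan(fov_deg * np.pi / 360.0))  # ≈ 640.0 px

baseline_m = 0.2  # front <-> left_stereo gap from config/sensors.yaml

def depth_to_disparity(depth_m: np.ndarray) -> np.ndarray:
    """disparity [px] = focal [px] * baseline [m] / depth [m]."""
    return focal * baseline_m / depth_m

depth = np.array([1.0, 8.0, 64.0])  # example depths in meters
print(depth_to_disparity(depth))    # ≈ [128., 16., 2.]
```

The same relation inverted (depth = focal * baseline / disparity) recovers depth from a stereo matcher's output.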

We will add this information to our website; thank you!