Hi,
I read the disparity image with OpenCV (cv2.IMREAD_ANYDEPTH) and then apply

(float(depth[y][x]) - 1.) / 256.

to get the depth z, then use the intrinsic parameters from the json file directly (without any scaling):

X = int((x - u0) / fx * z)
Y = int((y - v0) / fy * z)
but I got a strange point cloud which does not look like a stereo street view at all. Did I use the wrong approach?
thx!
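
For reference, here is a minimal, vectorized sketch of the pipeline described above. The file names and the JSON key layout (fx, fy, u0, v0) are assumptions for illustration and need to be adjusted to the actual dataset schema; treating zero-valued pixels as invalid is also an assumption about the encoding. Unlike the formulas in the question, this sketch keeps the back-projected coordinates as floats rather than casting them with int(...).

```python
import json
import cv2
import numpy as np

# Hypothetical paths for illustration.
disparity_path = "disparity.png"
camera_json_path = "camera.json"

# Read the 16-bit image without clipping it to 8 bits.
raw = cv2.imread(disparity_path, cv2.IMREAD_ANYDEPTH)

# Decode stored values as described above: (p - 1) / 256.
# Assumption: pixels stored as 0 mark invalid measurements.
valid = raw > 0
z = np.zeros(raw.shape, dtype=np.float64)
z[valid] = (raw[valid].astype(np.float64) - 1.0) / 256.0

# Load intrinsics from the JSON file. The key names here are
# assumptions; adapt them to the actual file layout.
with open(camera_json_path) as f:
    cam = json.load(f)["intrinsic"]
fx, fy, u0, v0 = cam["fx"], cam["fy"], cam["u0"], cam["v0"]

# Back-project every valid pixel (x, y) with depth z to camera
# coordinates (X, Y, Z), kept as floats to preserve geometry.
ys, xs = np.nonzero(valid)
Z = z[ys, xs]
X = (xs - u0) / fx * Z
Y = (ys - v0) / fy * Z
points = np.stack([X, Y, Z], axis=-1)  # (N, 3) point cloud
```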