InzamamAnwar closed this issue 5 years ago.
Hi, the network estimates disparity values at training/test time. You can easily convert these values into real-world depth by exploiting the intrinsic parameters provided by the KITTI dataset. In "utils/evaluation_utils.py" you can find the "convert_disps_to_depths_kitti" function for this purpose.
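For context, the conversion relies on the standard stereo relation depth = focal_length × baseline / disparity. A minimal sketch of the idea (the function name and the default constants here are illustrative, not the repository's exact code; `convert_disps_to_depths_kitti` applies the same relation with per-width KITTI calibration):

```python
import numpy as np

def disp_to_depth(disp_px, focal_px=721.5377, baseline_m=0.54):
    """Stereo relation: depth [m] = focal [px] * baseline [m] / disparity [px].

    disp_px is a disparity map in pixel units; the defaults are the
    commonly cited focal length for 1242-px-wide KITTI images and the
    KITTI stereo baseline.
    """
    return focal_px * baseline_m / np.maximum(disp_px, 1e-6)  # clamp to avoid /0
```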
Thank you for your reply. If the test image does not belong to KITTI, we will use this function (with the corresponding intrinsic parameters) for real-world depth estimation.
However, keep in mind that such a conversion is meaningful only if both the camera and the scene are the same at training and test time. In a totally different scenario (e.g. an indoor scene) you can't recover the exact scale factor for the real-world depth conversion, because of the ill-posed nature of the monocular depth estimation task. In that case, the network gives you only relative depth information about the scene.
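One common workaround (not part of this repository, just the usual practice in monocular depth evaluation) is to recover a per-image scale factor from whatever ground truth is available, e.g. by median scaling. A minimal sketch:

```python
import numpy as np

def median_scale(pred_depth, gt_depth, valid_mask):
    """Align relative predicted depth to metric ground truth.

    Uses the ratio of medians over valid pixels, the standard trick
    when the absolute scale of a monocular prediction is unknown.
    """
    scale = np.median(gt_depth[valid_mask]) / np.median(pred_depth[valid_mask])
    return pred_depth * scale
```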
Hi!
I have modified the function, given below, to get depth in the real world:

```python
from utils.evaluation_utils import width_to_focal  # image width -> focal length (px)

def get_depth(pred_disparity):
    # Assumes pred_disparity has been resized to the original KITTI
    # resolution and is normalized by image width.
    height, width = pred_disparity.shape
    pred_disparity = pred_disparity * width  # back to pixel disparity
    # depth = focal * baseline / disparity; 0.54 m is the KITTI stereo baseline
    pred_depth = width_to_focal[width] * 0.54 / pred_disparity
    return pred_depth
```
I have two questions regarding this:

1. What is width_to_focal?
Hi! I'll try to answer your questions:
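width_to_focal is the lookup table defined in utils/evaluation_utils.py that maps KITTI image widths to the corresponding focal lengths in pixels. A sketch of what it contains (the values shown are the commonly cited KITTI calibrations; verify against the file itself):

```python
# Focal length (px) per KITTI image width, as used by
# convert_disps_to_depths_kitti. Verify against utils/evaluation_utils.py.
width_to_focal = {
    1242: 721.5377,
    1241: 718.856,
    1238: 718.3351,
    1224: 707.0493,
}
```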
Hi! First of all, thank you for sharing your findings. I ran the single-image test and got output in "npy" format. Can you please provide details about the scale of the output, and whether it gives real-world meters or not? If not, how can the output "npy" be scaled to get real-world meters?
Thanks!
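Putting the pieces from this thread together, a minimal end-to-end sketch for a KITTI-like image, assuming the .npy file holds the width-normalized disparity map produced by the single-image test and reusing the get_depth function posted above (the file name and the 375×1242 resolution are illustrative):

```python
import numpy as np
import cv2

disp = np.load("disp.npy").squeeze()   # width-normalized disparity from the test
orig_h, orig_w = 375, 1242             # original KITTI image resolution
disp = cv2.resize(disp, (orig_w, orig_h), interpolation=cv2.INTER_LINEAR)
depth_m = get_depth(disp)              # metric depth in meters (function above)
```

Note that this only yields meaningful meters for KITTI-like cameras and scenes; for anything else, the scale-ambiguity caveat discussed earlier applies.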