shariqfarooq123 / AdaBins

Official implementation of AdaBins: Depth Estimation Using Adaptive Bins
GNU General Public License v3.0

Inference and visualization #59

Open MarcusSchilling opened 2 years ago

MarcusSchilling commented 2 years ago

Hello,

I used the pre-trained AdaBins KITTI model as follows: I cropped the rectified KITTI-360 images to the resolution the AdaBins network expects (horizontal cropping only) and then ran depth prediction. My question is whether the output is a reasonable result. Is it normal that the depth image shows a ceiling in its upper part? Could this be because the LiDAR only covers an angle of up to 10 degrees above the horizontal, so the training data only had labels for the lower part of the image?
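For reference, the horizontal center crop described above can be sketched as follows. The widths used in the example (1408 for the rectified KITTI-360 frames, 1241 for the KITTI input) are assumptions and should be checked against the actual data. One detail that matters for the later back-projection: cropping shifts the principal point, so cx must be reduced by the crop offset.

```python
import numpy as np

def center_crop_width(image, intrinsics, target_width):
    """Horizontally center-crop an (H, W, ...) image and shift cx accordingly."""
    w = image.shape[1]
    left = (w - target_width) // 2
    cropped = image[:, left:left + target_width]
    k = intrinsics.copy()
    k[0, 2] -= left  # the principal point moves with the crop
    return cropped, k

# Example with assumed sizes: 1408-wide KITTI-360 frame -> 1241-wide KITTI input.
frame = np.zeros((376, 1408, 3), dtype=np.uint8)
K = np.array([[552.554261, 0.0, 682.049453],
              [0.0, 552.554261, 238.769549],
              [0.0, 0.0, 1.0]])
cropped, K_cropped = center_crop_width(frame, K, 1241)
```

If the visualization below still uses the uncropped cx, every point would be laterally offset, so the adjusted matrix is the one to feed to Open3D.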

The visualization was made with the following code:

import open3d

# Start from a default intrinsic object, then override its matrix with the
# KITTI-360 calibration.
camera_intrinsic = open3d.camera.PinholeCameraIntrinsic(
    open3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault
)
camera_intrinsic.intrinsic_matrix = load_intrinsic_camera()
print(load_intrinsic_camera())
print(load_extrinsic_cam_coordinates_kitti360())

# Load the predicted depth map and back-project it to a point cloud.
depth_image = open3d.io.read_image(
    "/path_to_depth_prediction/2013_05_28_drive_0000_sync/0000010767.png"
)
extrinsic_cam_to_velo = load_extrinsic_cam_coordinates_kitti360()
point_cloud = open3d.geometry.PointCloud.create_from_depth_image(
    depth_image, camera_intrinsic, extrinsic_cam_to_velo
)
open3d.visualization.draw_geometries([point_cloud])
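One thing worth double-checking in this call: create_from_depth_image divides the stored pixel values by its depth_scale argument, which defaults to 1000.0 (i.e. millimetre depth maps). If the AdaBins predictions were written as 16-bit PNGs scaled by 256, following the KITTI depth benchmark convention (an assumption about how these files were saved), the default scale distorts the geometry. A small numpy sketch of the mismatch:

```python
import numpy as np

true_depth_m = 23.5                            # metric depth at some pixel
stored = np.uint16(round(true_depth_m * 256))  # hypothetical PNG encoding

# Open3D recovers stored_value / depth_scale:
recovered_default = stored / 1000.0  # depth_scale=1000 (default): wrong scale
recovered_scaled = stored / 256.0    # depth_scale=256: correct metric depth
```

If that encoding applies here, passing depth_scale=256.0 to create_from_depth_image should restore the metric scale of the point cloud.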

Here are the intrinsic camera parameters:

[[552.554261   0.         682.049453]
 [  0.         552.554261 238.769549]
 [  0.           0.          1.      ]]

And the extrinsic camera-to-Velodyne transform:

[[ 0.04307104 -0.08829286  0.99516293  0.80439144]
 [-0.99900437  0.00778461  0.04392797  0.29934896]
 [-0.01162549 -0.99606414 -0.08786967 -0.17702258]
 [ 0.          0.          0.          1.        ]]

I would be very thankful for a response.

ScreenCapture_2022-03-04-11-46-21