LiheYoung / Depth-Anything

[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
https://depth-anything.github.io
Apache License 2.0

Distortion problem of point cloud map #186

Open APPZ99 opened 3 months ago

APPZ99 commented 3 months ago

Hi, thanks for your great work!

I ran into a problem when converting a depth map to a point cloud. I used Depth Anything to predict depth for images from the KITTI-360 dataset, but when I converted the result to a point cloud, some details of the geometry confused me.

The input image and its predicted depth map are shown below:

[Screenshots: input image and predicted depth map]

The converted point cloud is shown below:

[Screenshots: two views of the converted point cloud]

From the top view, the road appears funnel-shaped, and the road signs are also severely distorted; the whole point cloud seems to converge toward the camera center.

What could be causing this? Is the depth map too dense, or is there a problem with my camera model? Below is the conversion code I used (pinhole camera model).

import numpy as np
import open3d as o3d
from PIL import Image

image_path = './conver_100.png'

# Load the 16-bit depth image and min-max normalize it to [0, 1].
depth_image = Image.open(image_path).convert("I")
depth_np = np.asarray(depth_image, dtype=np.uint16)
depth_np = (depth_np - np.min(depth_np)) / (np.max(depth_np) - np.min(depth_np))

def depth_to_point_cloud(depth_map, intrinsic):
    # Back-project every pixel through the pinhole model:
    # X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    h, w = depth_map.shape
    i, j = np.meshgrid(np.arange(w), np.arange(h), indexing='xy')

    x = (i - intrinsic[0, 2]) * depth_map / intrinsic[0, 0]
    y = (j - intrinsic[1, 2]) * depth_map / intrinsic[1, 1]
    x = x.reshape(-1)
    y = y.reshape(-1)
    z = depth_map.reshape(-1)

    points = np.stack((x, y, z), axis=-1)
    return points

def visualize_point_cloud(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    o3d.visualization.draw_geometries([pcd])

# Camera intrinsics (pixels)
fx = 552.554261
fy = 552.554261
cx = 682.049453
cy = 238.769549

intrinsic = np.array([
    [fx, 0, cx],
    [0, fy, cy],
    [0, 0, 1]
])

points = depth_to_point_cloud(depth_np, intrinsic)
visualize_point_cloud(points)
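One thing I am not sure about: the raw Depth Anything output may be relative inverse depth (disparity-like) rather than metric depth, and if so, using the normalized values directly as Z would warp the geometry. Below is a rough sketch of what I would try; scale and shift are placeholders that would have to be recovered per image (e.g. from LiDAR or a metric fine-tuned model):

def inverse_depth_to_depth(inv_depth, scale=1.0, shift=0.0, eps=1e-6):
    # scale and shift are placeholders: for an affine-invariant prediction
    # they must be estimated per image before the values are metric.
    aligned = scale * inv_depth + shift
    return 1.0 / np.clip(aligned, eps, None)

# depth_metric = inverse_depth_to_depth(depth_np, scale=..., shift=...)
# points = depth_to_point_cloud(depth_metric, intrinsic)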

For comparison, the point cloud below was produced by the same code from a depth map generated from the LiDAR point cloud. Its geometry looks reasonable, with no obvious distortion.

[Screenshot: point cloud from the LiDAR-derived depth map]
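Since I already have LiDAR depth for these frames, one check I might try is aligning the raw prediction to the LiDAR depth with a least-squares scale and shift before back-projecting, roughly like the sketch below (assuming pred is the raw inverse-depth-like prediction and lidar_depth is a same-sized metric depth map with 0 where there is no return):

def align_to_lidar(pred, lidar_depth, eps=1e-6):
    # Fit scale/shift so that scale * pred + shift ~ 1 / lidar_depth
    # on pixels with a valid LiDAR return, then invert to metric depth.
    mask = lidar_depth > 0
    target = 1.0 / lidar_depth[mask]
    A = np.stack([pred[mask], np.ones(mask.sum())], axis=-1)
    (scale, shift), *_ = np.linalg.lstsq(A, target, rcond=None)
    return 1.0 / np.clip(scale * pred + shift, eps, None)

# depth_aligned = align_to_lidar(depth_np, lidar_depth)
# points = depth_to_point_cloud(depth_aligned, intrinsic)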

dsfcdsfdg commented 1 month ago

I have the same question. Did you solve this distortion?