Hi, thanks for your great work!
I ran into some problems when converting a depth map to a point cloud. I used Depth Anything to predict depth for images from the KITTI360 dataset, and some details of the conversion to a point cloud confused me.
The input image and its predicted depth map are shown below:
The converted point cloud is shown below:
From the top view, the road is funnel-shaped and the road signs are severely distorted; the whole point cloud seems to converge toward the camera center.
What could be causing these problems? Is the depth map too dense, or is there a problem with my camera model? Below is the conversion code I used (pinhole camera model).
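Roughly, the conversion is the standard pinhole back-projection. Here is a minimal sketch of what I mean (the intrinsics `fx`, `fy`, `cx`, `cy` are placeholders for the calibrated KITTI360 values, and `depth` is assumed to already be metric):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H, W), assumed metric, to camera-frame points
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with zero / invalid depth
```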
The point cloud below was produced with the same code, but with the depth map generated from the LiDAR point cloud. Its geometry looks much better, with no obvious distortion.
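For reference, the sparse LiDAR depth map was obtained with the usual projection step, roughly along these lines (a sketch only; `T_cam_velo` and the intrinsics stand in for the actual KITTI360 calibration):

```python
import numpy as np

def lidar_to_depth_map(points_velo, T_cam_velo, fx, fy, cx, cy, h, w):
    """Project LiDAR points (N, 3) into the image to build a sparse depth map.
    T_cam_velo is the 4x4 LiDAR-to-camera extrinsic from the calibration files."""
    pts_h = np.hstack([points_velo, np.ones((points_velo.shape[0], 1))])
    cam = (T_cam_velo @ pts_h.T).T[:, :3]      # points in the camera frame
    cam = cam[cam[:, 2] > 0]                   # keep points in front of the camera
    u = np.round(fx * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = np.round(fy * cam[:, 1] / cam[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w))
    # write far points first so the nearest return per pixel wins
    order = np.argsort(-cam[ok, 2])
    depth[v[ok][order], u[ok][order]] = cam[ok, 2][order]
    return depth
```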