mileyan / pseudo_lidar

(CVPR 2019) Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving
https://mileyan.github.io/pseudo_lidar/
MIT License

In a mono image, how do you transform depth to a point cloud? #15

Open bright0072876 opened 5 years ago

bright0072876 commented 5 years ago

Given a monocular image, how do I get a point cloud from the depth image?

mileyan commented 5 years ago

I use the pre-trained DORN model. You can download it from https://github.com/hufu6371/DORN .

bright0072876 commented 5 years ago

DORN only handles the first step, estimating depth from an RGB image; it does not generate a point cloud.

mileyan commented 5 years ago

You can use my code to convert disparity to point clouds. https://github.com/mileyan/pseudo_lidar#convert-the-disparities-to-point-clouds
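For reference, the back-projection such a conversion performs can be sketched as follows, assuming a simple pinhole camera model with focal lengths `fu`, `fv` and principal point `(cu, cv)`. The function and variable names here are illustrative, not the repo's actual API:

```python
import numpy as np

def depth_to_points(depth, fu, fv, cu, cv):
    """Back-project a depth map (in meters) to 3D points in camera coordinates."""
    rows, cols = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    # Pinhole inverse projection: X = (u - cu) * Z / fu, Y = (v - cv) * Z / fv.
    x = (u - cu) * depth / fu
    y = (v - cv) * depth / fv
    points = np.stack([x, y, depth], axis=-1)  # shape (H, W, 3)
    return points.reshape(-1, 3)

# Toy example: a 4x4 depth map at a constant 10 m with made-up intrinsics.
depth = np.full((4, 4), 10.0)
pts = depth_to_points(depth, fu=700.0, fv=700.0, cu=2.0, cv=2.0)
print(pts.shape)  # (16, 3)
```

The repo's script additionally transforms the points from the camera frame into the LiDAR (velodyne) frame using the KITTI calibration, which this sketch omits.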

mileyan commented 5 years ago

Update: Please add --is_depth in the command.

DeriZSY commented 4 years ago

Hi, do we need to do any processing before using the depth generated by DORN to generate a point cloud?

I generated the depth with the DORN pretrained model, using the code here: https://github.com/hufu6371/DORN/blob/master/demo_kitti.py .

Judging from the code, the depth is saved as a .png, and the result looks fine.

    depth = depth_prediction(args.filename)
    depth = depth * 256.0
    depth = depth.astype(np.uint16)
    img_id = args.filename.split('/')
    img_id = img_id[len(img_id) - 1]
    img_id = img_id[0:len(img_id) - 4]
    if not os.path.exists(args.outputroot):
        os.makedirs(args.outputroot)
    cv2.imwrite(str(args.outputroot + '/' + img_id + '_pred.png'), depth)

[image: 0000000013_depth_pred]

However, the point cloud generated with the provided code is obviously wrong. Do I need to do some preprocessing on the depth (for example, divide by 256) before using it?

DeriZSY commented 4 years ago

I solved the problem described above and successfully generated a valid point cloud from the DORN depth. Some tips:

  1. You must use the Caffe build provided in the DORN repository instead of a recent version; otherwise you may hit errors when loading the model prototxt.
  2. If you choose to generate depth by modifying the KITTI demo code (which I think is the most convenient way), you need to rescale the depth as described in the KITTI Depth devkit by simply adding depth = disp_map.astype(np.float) / 256 before projecting the depth to a point cloud.

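The rescaling in tip 2 comes from the KITTI depth devkit convention: depth in meters is stored as a uint16 PNG with values multiplied by 256 (0 marks invalid pixels). A minimal round-trip sketch of the encode/decode steps, with made-up depth values for illustration:

```python
import numpy as np

# Ground-truth depth in meters (illustrative values only).
depth_m = np.array([[12.5, 0.0],
                    [80.0, 3.25]])

# Encoding, as done in DORN's demo_kitti.py before cv2.imwrite:
stored = (depth_m * 256.0).astype(np.uint16)

# Decoding, the step that was missing before point-cloud generation:
decoded = stored.astype(np.float64) / 256.0

print(decoded)
```

Without the division by 256, the point-cloud script treats the raw uint16 values as meters, which is why the resulting cloud looks wrong.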
mileyan commented 4 years ago

Thanks so much. I have updated the code.

bright0072876 commented 4 years ago

Hi, DeriZSY. Can I just use the depth image to generate the point clouds, or do I need to predict the disparities first? I have already generated the depth image using the DORN Caffe demo code.

DeriZSY commented 4 years ago

> Can I just use the depth image to generate the point clouds, or do I need to predict the disparities first? I have already generated the depth image using the DORN Caffe demo code.

Use the depth directly. Note the --is_depth flag in this repo's point-cloud generation code.

bright0072876 commented 4 years ago

Should I first move the depth images to the predict_disparity folder?

DeriZSY commented 4 years ago

> Should I first move the depth images to the predict_disparity folder?

Please read the code yourself... then you'll have all the answers.

bright0072876 commented 4 years ago

When generating point clouds from a mono depth image, each image needs a camera calibration file. For ordinary images outside KITTI there is no calibration file, so the point cloud cannot be generated.

mileyan commented 4 years ago

> When generating point clouds from a mono depth image, each image needs a camera calibration file. For ordinary images outside KITTI there is no calibration file, so the point cloud cannot be generated.

Yes, you need calibration parameters when you generate the point cloud.
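For KITTI object-detection data those parameters come from the per-image calib file; the left color camera projection matrix P2 holds the intrinsics needed for back-projection. A minimal sketch of extracting fu, fv, cu, cv from such a file (the sample numbers below are typical KITTI values, used only for illustration):

```python
import numpy as np

# Illustrative contents of a KITTI object calib file (only the P2 line shown).
calib_text = "P2: 7.215377e+02 0.000000e+00 6.095593e+02 4.485728e+01 " \
             "0.000000e+00 7.215377e+02 1.728540e+02 2.163791e-01 " \
             "0.000000e+00 0.000000e+00 1.000000e+00 2.745884e-03"

def parse_p2(text):
    """Return the 3x4 left color camera projection matrix from calib text."""
    for line in text.splitlines():
        if line.startswith("P2:"):
            vals = [float(x) for x in line.split()[1:]]
            return np.array(vals).reshape(3, 4)
    raise ValueError("no P2 entry found")

P2 = parse_p2(calib_text)
fu, fv = P2[0, 0], P2[1, 1]   # focal lengths in pixels
cu, cv = P2[0, 2], P2[1, 2]   # principal point
print(fu, fv, cu, cv)
```

For a camera without a KITTI calib file, you would need to calibrate it yourself (e.g. with a checkerboard) to obtain equivalent intrinsics before back-projecting depth.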