mileyan / pseudo_lidar

(CVPR 2019) Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving
https://mileyan.github.io/pseudo_lidar/
MIT License

Camera Calibration for generating Pseudo Lidar Image from Depth Image #37

Open RGH-NitinVijay opened 4 years ago

RGH-NitinVijay commented 4 years ago

Hello,

Firstly, thank you for the amazing work with this repo.

I have a custom image from a monocular camera for which I generated the depth map using the following repo: https://github.com/nianticlabs/monodepth2

The depth results look good, and now I'm trying to generate the pseudo-LiDAR points for that image. I understand I need camera calibration parameters, so I calibrated my camera with the standard checkerboard technique. Looking at the KITTI and NYU datasets, I noticed that there is a calibration file for every single image. Is my assumption correct that one is required per image? If so, I'm not sure how such a file would be generated for every single frame. My understanding was that I would calibrate the camera once, obtain one set of calibration parameters for that camera, and then use them together with the previously obtained depth map to generate the pseudo-LiDAR image.

Let me know if my understanding is not correct. Thank you.
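For concreteness, this is the kind of back-projection I have in mind, as a rough sketch only (not code from this repo): it assumes a plain pinhole model, with fx, fy, cx, cy taken once from my checkerboard calibration and a metric depth map. If this is right, a single calibration would be enough for this step.

```python
import numpy as np

def depth_to_cam_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map into an (N, 3) point
    cloud in the camera coordinate frame (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep pixels with valid depth
```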

RGH-NitinVijay commented 4 years ago

Hello,

I'd appreciate it if anyone could share their thoughts on the questions above. Thank you!

mileyan commented 4 years ago

Hi @RGH-NitinVijay, in my understanding, if you use your monocular images for self-supervised learning, you don't have to generate calibrations for every frame. You can just use the (fixed) camera intrinsics to generate the point cloud in the camera coordinate frame. However, if you want to project the depth map into world coordinates, you will need the camera-to-world transformation matrix, which has to be estimated frame by frame.

Reference: http://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf
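To make the two cases concrete, here is a rough sketch (hypothetical names, not code from this repo): the back-projection only needs the fixed intrinsics, and going to world coordinates is just one extra homogeneous transform, but T_cam2world has to come from a per-frame pose estimate (e.g. visual odometry or SLAM).

```python
import numpy as np

def cam_points_to_world(points_cam, T_cam2world):
    """Map (N, 3) camera-frame points into world coordinates using a
    4x4 homogeneous camera-to-world pose, estimated per frame."""
    n = points_cam.shape[0]
    homo = np.hstack([points_cam, np.ones((n, 1))])  # (N, 4)
    return (T_cam2world @ homo.T).T[:, :3]
```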