heethesh / lidar_camera_calibration

Light-weight camera LiDAR calibration package for ROS using OpenCV and PCL (PnP + LM optimization)
BSD 3-Clause "New" or "Revised" License

Question(s) Regarding Where/How to Get Matrix of Projection from Point Cloud to Image Plane #23

Open Bsting118 opened 3 years ago

Bsting118 commented 3 years ago

I am currently trying to get a matrix corresponding to the RGB image, but with one channel for every LiDAR property. The process is called projection from point cloud to image plane, which, as you know, is used in this calibration method. I want to retrieve this matrix from the projection process and store the output to a text file or similar for later use.

I looked through your Python scripts and saw that the calibrate_camera_lidar.py script actually performs the projection on lines 354-415. I was wondering whether I could print that data, using a Python print statement or something along those lines, to obtain the matrix. I believe the projected data is stored in points2D, but please confirm this. It also seems points2D is a NumPy array, so I could set some print options if printing is the way to go here.

I also looked at the comments in your script, which say that the projection outputs are published to the camera_lidar topic. I am unsure what approach to take to get all the projection outputs from this ROS topic. Additionally, if I were using real-time (live) data, how would I get the projection outputs, since there would be no bag file?

There are also the img_corners.npy and pcl_corners.npy files, but I believe they only hold the image and LiDAR corner points, not the point-cloud-to-image-plane projection output I am looking for. Please confirm this as well.

So, if you can, please provide some additional insight into how to get a matrix corresponding to the RGB image with one channel for every LiDAR property, using the point-cloud-to-image-plane projection. I am also steering toward using real-time data via the camera and LiDAR publishers, so I am not really using a bag file at this point (just FYI).

Thank you for any feedback or help you provide. You have already helped me out a lot and I would just like to recognize that.

heethesh commented 3 years ago

I am not exactly sure which matrix you are referring to; I believe you just want a blank image with all the LiDAR points projected into image space, one for each LiDAR channel. Yes, you are right in pointing out the lines of code that do that. All you need to do there is zero out the actual image pixels with img[:, :, :] = 0 before drawing the LiDAR points, so that only the projected points are visible.
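A minimal sketch of that zero-out step; the image size and the point coordinates below are hypothetical, and the pixels are set directly rather than drawn with cv2.circle as the script does:

```python
import numpy as np

# Hypothetical camera frame (480x640 RGB); in the script this comes
# from the subscribed image topic
img = np.full((480, 640, 3), 255, dtype=np.uint8)

# Zero out the image so only the projected LiDAR points remain visible
img[:, :, :] = 0

# Hypothetical projected 2D pixel coordinates (points2D in the script)
points2D = np.array([[100, 200], [320, 240]])
for x, y in points2D:
    img[y, x] = (255, 255, 255)  # note the row/column (y, x) indexing
```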

Note that 3D points behind the image plane (or camera center) also get projected onto the image plane; I already filter these out (points with z-coordinate < 0):

```python
# Filter points in front of camera
inrange = np.where((points3D[:, 2] > 0) &
...
```
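For illustration, here is that filter applied to a tiny hypothetical point array (one x, y, z, intensity row per point); the actual script additionally clips points that land outside the image bounds:

```python
import numpy as np

# Hypothetical points in the camera frame: x, y, z, intensity per row
points3D = np.array([
    [ 1.0,  0.5,  4.0, 120.0],
    [-0.3,  0.2, -1.5,  80.0],   # behind the camera (z < 0), dropped
    [ 0.8, -0.1,  2.5, 200.0],
])

# Keep only points in front of the camera
inrange = np.where(points3D[:, 2] > 0)
points3D = points3D[inrange[0]]
print(len(points3D))  # 2
```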

The points are already color-mapped with the intensity channel on line 394. Just replace the last index here, points3D[:, -1]: use 0, 1, or 2 instead of -1 for x, y, or z (aka depth) colormaps respectively. You may also switch from jet, which is used here, to any other matplotlib colormap.
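A sketch of that channel swap with a matplotlib colormap; the point values are hypothetical, and the normalization step here is a generic min-max scaling rather than the script's exact code:

```python
import numpy as np
from matplotlib import cm

# Hypothetical points: x, y, z, intensity per row
points3D = np.array([
    [1.0,  0.5, 4.0, 120.0],
    [0.8, -0.1, 2.5, 200.0],
])

# Index 2 selects z (depth) instead of the default -1 (intensity)
channel = points3D[:, 2]

# Normalize to [0, 1] before applying the colormap
norm = (channel - channel.min()) / (channel.max() - channel.min() + 1e-9)

# cm.jet matches the colormap named above; swap in e.g. cm.viridis if preferred
colors = cm.jet(norm)
print(colors.shape)  # one RGBA tuple per point
```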

There is not really any difference between running this script with live data and with a ROS bag. Playing a ROS bag just publishes the topics as if they were being published live. Just make sure that you subscribe to all the correct topics (names/message types, etc.) and that the calibration (camera intrinsic matrix and extrinsic transformation matrix) is good, so the results make sense.

To save your data (the img variable), just use cv2.imwrite to save it as an image, or np.save for the arrays, instead of printing them to the console. You may also use scipy.io.savemat to save NumPy arrays and load them in MATLAB, if you use it.
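A small sketch of the np.save round trip; the file path and array contents are hypothetical:

```python
import os
import tempfile
import numpy as np

# Hypothetical projected pixel coordinates (points2D in the script)
points2D = np.array([[100.0, 200.0], [320.0, 240.0]], dtype=np.float32)

# Save the raw array to a .npy file for later use
path = os.path.join(tempfile.mkdtemp(), "points2D.npy")
np.save(path, points2D)

# Reload it later with np.load
loaded = np.load(path)
print(np.array_equal(loaded, points2D))  # True
```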

Bsting118 commented 3 years ago

So, if I wanted to store and save the data from points2D and points3D, could I just use cv2.imwrite or NumPy's np.save to write them to a file?

Also, the projection portion of the Python script seems to write to pcl_corners; could pcl_corners and img_corners be used as proper projection (point cloud to image plane) data?

heethesh commented 3 years ago

So, if I wanted to store and save the data from points2D and points3D, could I just use cv2.imwrite or NumPy's np.save to write them to a file?

Use only np.save here.

Also, the projection portion of the Python script seems to write to pcl_corners; could pcl_corners and img_corners be used as proper projection (point cloud to image plane) data?

No, don't use these. These are the points manually picked by the user; they contain no projection information.

Bsting118 commented 3 years ago

Is it also possible to get grayscale image outputs/visualizations from the sensor fusion/calibration?

Bsting118 commented 3 years ago

I have another question: where are the camera intrinsic parameters located in the workspace? I am trying to combine your data with a potential MATLAB script to obtain projection-matrix data, as described here: https://www.mathworks.com/help/lidar/ref/projectlidarpointsonimage.html .

heethesh commented 3 years ago

See this in the README: https://github.com/heethesh/lidar_camera_calibration/blob/master/scripts/calibrate_camera_lidar.py#L436
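If the goal is to hand the intrinsics to MATLAB's projectLidarPointsOnImage, one option is the scipy.io.savemat route mentioned earlier in this thread. A minimal sketch, where the matrix values are placeholders and must be replaced with your own calibration results:

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat

# Hypothetical 3x3 camera intrinsic matrix; substitute the values
# produced by your own camera calibration
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

path = os.path.join(tempfile.mkdtemp(), "calib.mat")
savemat(path, {"K": K})  # in MATLAB: data = load('calib.mat'); data.K
```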