Open schreiterjp opened 1 year ago
Thanks for your request. It would be a useful feature, and I may work on this later in my spare time.
Thanks for your reply.
I thought about it a bit more. Since the viewer already uses the calibration matrix to project the point cloud onto the image, that is probably the easiest starting point for a ROS 2 node. In the end a user would get a real-time visualisation like in the viewer, but displayed in RViz2, for example.
The calibration matrix projects an image point into the LiDAR 3D frame, right?
With the inverted matrix I should be able to project a 3D LiDAR point into the 2D image, if I understood it correctly.
For a first test I thought about using OpenCV with Python, specifically the projectPoints function.
Yes, T_lidar_camera in the calibration result file is the transformation that brings a point in the camera frame to the LiDAR frame (i.e., pt_lidar = T_lidar_camera * pt_camera).
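As a minimal numpy sketch of that relationship (the identity matrix below is just a placeholder for the calibrated value):

import numpy as np

# T_lidar_camera from the calibration result maps camera-frame points to the LiDAR frame
T_lidar_camera = np.eye(4)  # placeholder; load the real 4x4 matrix from the result file

# pt_lidar = T_lidar_camera * pt_camera
pt_camera = np.array([0.1, 0.2, 1.0, 1.0])  # homogeneous point in the camera frame
pt_lidar = T_lidar_camera @ pt_camera

# To project LiDAR points into the image, go the other way: transform them into the
# camera frame with the inverse, then apply the camera intrinsics
T_camera_lidar = np.linalg.inv(T_lidar_camera)
pt_camera_again = T_camera_lidar @ pt_lidar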
Hi @koide3,
I tried to do the projection using cv2:
import cv2
import numpy as np

# saved image from a ROS image message with shape (1520, 2688, 3)
image = cv2.imread("xxx.png")

# saved point cloud from a ROS PointCloud2 message converted to numpy with shape (67719, 3)
point_cloud = np.loadtxt("xxx.txt")

# camera-LiDAR calibration with direct_visual_lidar_calibration
# 4x4 projection matrix from the matrix converter
# projection_matrix = np.float64([[0.06767, -0.11368, 0.99121, -0.02107],
#                                 [-0.99735, 0.01882, 0.07025, -0.14916],
#                                 [-0.02665, -0.99334, -0.11211, 0.17919],
#                                 [0.00000, 0.00000, 0.00000, 1.00000]])

# inverted 4x4 projection matrix from the matrix converter
projection_matrix = np.float64([[0.06767, -0.99735, -0.02665, -0.14257],
                                [-0.11368, 0.01882, -0.99334, 0.17841],
                                [0.99121, 0.07025, -0.11211, 0.05145],
                                [0.00000, 0.00000, 0.00000, 1.00000]])


def fuse_lidar_camera(image, point_cloud, projection_matrix):
    # Convert the rotation part of the extrinsic matrix to a Rodrigues vector
    rvec, _ = cv2.Rodrigues(projection_matrix[:3, :3])
    tvec = projection_matrix[:3, 3]

    # Pinhole intrinsics and distortion coefficients from the camera calibration
    camera_matrix = np.float64([[1718.65976, 0., 1345.4385],
                                [0., 1718.88595, 796.10968],
                                [0., 0., 1.]])
    distortion = np.float64([[-0.378833, 0.102727, -0.002767, -0.003375, 0.0]])

    # Project the 3D LiDAR points into the image plane
    projected_points, _ = cv2.projectPoints(point_cloud, rvec, tvec, camera_matrix, distortion)

    # Create a blank image with the same dimensions as the input image
    fused_image = np.zeros((image.shape[0], image.shape[1]), dtype=np.float32)

    # Draw each projected point into the blank image
    for point in projected_points:
        x, y = point[0][0], point[0][1]
        cv2.circle(fused_image, (int(x), int(y)), 2, 255, -1)

    # cv2.imread returns BGR, so the grayscale weights are in B, G, R order
    grayscale_image = np.dot(image, [0.114, 0.587, 0.299])

    # Blend the projected points with the grayscale input image
    combined_image = np.uint8(0.7 * grayscale_image + 0.3 * fused_image)
    return combined_image


res = fuse_lidar_camera(image, point_cloud, projection_matrix)
cv2.imshow("fused", res)
cv2.waitKey(0)
cv2.destroyAllWindows()
The calibration looked really good inside the viewer itself.
But with this projection there is quite a bit of offset. Please see the following crop from the resulting image.
Any idea how to overcome this issue?
Thanks.
The code itself looks good. Did you use the pinhole camera model for calibration?
Yes, I used this package: https://index.ros.org/p/camera_calibration/github-ros-perception-image_pipeline/#humble with the pinhole argument.
The calibration matrix was obtained with manual calibration (initial_guess_manual), which was visually better than the fine registration. The automatic method behaved similarly to this issue: https://github.com/koide3/direct_visual_lidar_calibration/issues/54
I don't really know where the offset comes from. The camera is mounted above the LiDAR, so if the offset were the other way around it would be plausible and could have pointed to some kind of extrinsic calibration error.
Thanks for taking the time.
@schreiterjp Hi. I managed to solve this issue. The offset comes from the intrinsic parameters: the output stores them as fx, cx, fy, cy, which is the correct format, but I read them as fx, fy, cx, cy, so the values ended up swapped; you may have done the same. Try checking the intrinsic parameters. I will share my output here.
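As a quick sanity check (the unpacking order below is only an assumption; verify it against your own calib.json), the camera matrix should end up with the focal lengths on the diagonal and the principal point in the last column:

import numpy as np

# Intrinsic values taken from the code above; the unpacking order is an assumption,
# so verify it against your own calib.json before relying on it
fx, fy, cx, cy = 1718.65976, 1718.88595, 1345.4385, 796.10968

camera_matrix = np.float64([[fx, 0.0, cx],
                            [0.0, fy, cy],
                            [0.0, 0.0, 1.0]])

# If the focal lengths and the principal point get mixed up, every projected point
# is shifted by a large constant amount, which looks like the offset shown above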
Here is a package for ROS 2 that I have created: https://github.com/AbdullahGM1/ros2_lidar_camera_fusion_with_detection
Hey @AbdullahGM1, does it support ROS 2 Foxy?
I did not try it on Foxy; you can clone the package and try it.
@koide3, I am a bit confused here.
When I analysed the code in the viewer, T_camera_lidar (which is the inverse of T_lidar_camera, https://github.com/koide3/direct_visual_lidar_calibration/blob/main/src/viewer.cpp line no. 108) is being used to project the camera feed onto the LiDAR.
So I believe T_lidar_camera can be used directly to project the LiDAR point cloud to the camera, and T_camera_lidar can be used to project the camera onto the LiDAR point cloud, isn't it?
I am also trying to follow the same approach but am getting a shift as shown below. @jeychandar, I made sure that the intrinsics are correct, but I am still not getting a complete projection. Are there any other suggestions for this kind of behaviour?
Hello,
I have created a package for lidar/camera fusion (Package)
I think you can give it a shot.
@AbdullahGM1, thanks for the quick reply.
However, I had a look at your package, and it looks like I need detections, don't I?
Is it possible to project only the LiDAR point cloud onto the image with this package, without detections?
Yes, you can project all the LiDAR points, but you need to modify the code, because I apply filters to the point cloud. You can just delete that part and you will get all the points.
Hi,
thanks for the great project and easy calibration of lidar and camera.
As a nice addition to the project itself, I would like to request a ROS 2 node that implements the fusion/overlay of the camera image and the LiDAR point cloud. The calibration data could directly be used to publish a fused point cloud.
I don't know what the best way to republish is, but I could imagine something like the following:
A publishing ROS 2 node would also be nice so that the results can be visualised in RViz.
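Very roughly, and just as a sketch of the idea (all topic names, the extrinsic matrix, and the intrinsics below are placeholders, and read_points_numpy assumes a recent sensor_msgs_py), such a node could look like this:

import numpy as np
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs_py import point_cloud2
from cv_bridge import CvBridge
from message_filters import Subscriber, ApproximateTimeSynchronizer


class LidarCameraOverlay(Node):
    def __init__(self):
        super().__init__("lidar_camera_overlay")
        self.bridge = CvBridge()

        # Placeholder calibration values; fill these in from the calibration result
        self.T_camera_lidar = np.eye(4)
        self.camera_matrix = np.float64([[1000.0, 0.0, 640.0],
                                         [0.0, 1000.0, 360.0],
                                         [0.0, 0.0, 1.0]])
        self.distortion = np.zeros(5)

        self.pub = self.create_publisher(Image, "fused_image", 10)

        # Approximately synchronise the camera image and the LiDAR point cloud
        image_sub = Subscriber(self, Image, "image_raw")
        cloud_sub = Subscriber(self, PointCloud2, "points")
        self.sync = ApproximateTimeSynchronizer([image_sub, cloud_sub], queue_size=10, slop=0.1)
        self.sync.registerCallback(self.callback)

    def callback(self, image_msg, cloud_msg):
        image = self.bridge.imgmsg_to_cv2(image_msg, desired_encoding="bgr8")
        points = point_cloud2.read_points_numpy(
            cloud_msg, field_names=("x", "y", "z"), skip_nans=True)

        # Transform LiDAR points into the camera frame and keep only those in front of the camera
        pts_cam = points @ self.T_camera_lidar[:3, :3].T + self.T_camera_lidar[:3, 3]
        pts_cam = pts_cam[pts_cam[:, 2] > 0.0]
        if len(pts_cam) == 0:
            return

        # The extrinsics are already applied, so rvec and tvec are zero here
        uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3),
                                  self.camera_matrix, self.distortion)

        # Draw the projected points onto the camera image and republish it
        for u, v in uv.reshape(-1, 2):
            if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
                cv2.circle(image, (int(u), int(v)), 2, (0, 0, 255), -1)

        out = self.bridge.cv2_to_imgmsg(image, encoding="bgr8")
        out.header = image_msg.header
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(LidarCameraOverlay())
    rclpy.shutdown()


if __name__ == "__main__":
    main()

The published overlay image could then simply be displayed in RViz with an Image display.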
I didn't find a project using ROS 2 for that kind of task; it would also be cool if you have any suggestions.
Thanks again for the really nice work!
Cheers jp