EPVelasco / lidar-camera-fusion

The code implemented in ROS projects a point cloud obtained by a Velodyne VLP16 3D-Lidar sensor on an image from an RGB camera.
GNU General Public License v3.0
206 stars 41 forks

Using Ouster instead of Velodyne #4

Open bodisimo opened 1 year ago

bodisimo commented 1 year ago

Hey, my colleague and I are trying to use the code you provided, but we use another lidar. Both lidars use the ROS message type PointCloud2.

We changed the input ROS topics and also the launch files. Everything starts without a problem.

When RViz starts, we can see the data from the lidar, and after configuring the inputs we can also see the image from the camera. But we can't see any fusion.

Can you help us?

EPVelasco commented 1 year ago

Hi, could you tell me which lidar you use?

I would like to know if the point cloud is projected in laser layers. The Lidar I use is a VLP-16 and it projects 16 laser layers.

bodisimo commented 1 year ago

We use the Ouster OS1-128-U. What also comes to mind is that the image is not shown automatically with the RViz config you provided; we need to manually change the input to see the image provided by the camera.

EPVelasco commented 1 year ago

What camera are you using? Is the camera calibrated with the Ouster OS1-128-U? I used the Lidar-Camera package for calibration.

If you don't see the image automatically in RViz, it could be because the image topic name is wrong.

Consider that the package I coded was made for a lidar with few laser layers to interpolate. If you use a lidar with 128 laser layers, the interpolation might not run in real time due to the huge volume of point cloud data.

In the demonstration video, I used a 16-layer lidar and interpolated it up to 10 times, making a virtual 160-layer point cloud. Maybe you don't need to interpolate your OS1-128-U lidar, but just have it calibrated correctly with the Lidar-Camera package.
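The layer-interpolation idea can be sketched outside ROS. This is a minimal illustration only, assuming point-by-point linear interpolation between two adjacent rings; the function name and scheme are assumptions, not the package's actual code:

```python
# Minimal sketch: create virtual laser layers between two real lidar
# rings by linear interpolation of corresponding points.
# Illustration only -- the package's real interpolation may differ.

def interpolate_rings(ring_a, ring_b, n_virtual):
    """Return n_virtual rings between ring_a and ring_b; each point is
    interpolated against the point with the same azimuth index."""
    virtual = []
    for k in range(1, n_virtual + 1):
        t = k / (n_virtual + 1)  # fraction of the way from ring_a to ring_b
        ring = [tuple(a + t * (b - a) for a, b in zip(pa, pb))
                for pa, pb in zip(ring_a, ring_b)]
        virtual.append(ring)
    return virtual

# Two toy rings, one point each, 1 m apart vertically:
ring_low = [(1.0, 0.0, 0.0)]
ring_high = [(1.0, 0.0, 1.0)]
mid = interpolate_rings(ring_low, ring_high, 1)[0]
# mid[0] is halfway between the two real points: (1.0, 0.0, 0.5)
```

Interpolating a 16-ring cloud 10 times this way yields the virtual 160-layer cloud mentioned above, at the cost of roughly 10x the data volume.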

bodisimo commented 1 year ago

We just noticed that we use ROS Noetic. Do you know if this package also works with it?

EPVelasco commented 1 year ago

The lidar-camera calibration package has a branch for ROS Noetic. You have to clone the package with the branch name implement-noetic-action:

git clone https://github.com/ankitdhall/lidar_camera_calibration.git -b implement-noetic-action

Our Lidar-Camera fusion code was tested with ROS Noetic. The results are detailed in this issue.

bodisimo commented 1 year ago

Hmm. Would it be possible for you to look over our problem? If you help us we will thank you with a little bit of PayPal support :D

EPVelasco commented 1 year ago

The purpose of the issues is to let the GitHub community know about bugs and fix them. You can describe the problem in these messages and we will help you at no cost. Tell me, what is the problem? Have you been able to use the Lidar-Camera package?

abhishekt711 commented 1 year ago

I have calibrated a Velodyne HDL-64 lidar and camera. I set the correct input topics in the launch file, but I am only able to visualize the lidar data and the camera image separately. I am not able to visualize the fused data and depth detection. Can you please help?

EPVelasco commented 1 year ago

Hello, can you upload an image of rqt_graph so we can see the nodes and their connections? To do this, type in a terminal:

rqt_graph
abhishekt711 commented 1 year ago

Please check the rqt_graph attached here. I am unable to visualize the /pcOnImage_image and /points2 topics.

EPVelasco commented 1 year ago

Please launch only the lidar_camera_node node:

  roslaunch lidar_camera_fusion vlp16OnImg.launch 

This node already performs the point cloud interpolation internally. This node must subscribe to the /velodyne_points topic. Make sure that the file vlp16OnImg.launch has the correct spelling of the topic name.

abhishekt711 commented 1 year ago

I have launched only the lidar_camera_node node with roslaunch lidar_camera_fusion vlp16OnImg.launch; please find the new rqt_graph image attached.

EPVelasco commented 1 year ago

The lidar_camera_node node is not subscribing to the image topic. Check the name of the image topic going to the node. If you don't know how to see the topics of your rosbag, use:

rosbag info name_of_your_rosbag.bag

As you are using a rosbag, I suggest you use the launch file vlp16OnImg_offline.launch:

  roslaunch lidar_camera_fusion vlp16OnImg_offline.launch 

The lidar_camera_node needs the point cloud topic /velodyne_points and the image topic /camera/color/image_raw. Check the topic names of your rosbag and modify the file vlp16OnImg_offline.launch.

abhishekt711 commented 1 year ago

I am unable to visualize the /points2 topic, and the image only comes in on the /pcOnImage_image topic.

I can visualize the /velodyne_points topic but not /points2. Also, /pcOnImage_image and the camera image look the same in RViz.

EPVelasco commented 1 year ago

Could you check whether the image and point cloud topics of your rosbag have a timestamp in the header?

The lidar_camera_node subscribes to both topics and synchronizes them using the timestamp in each message header.
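That synchronization behaves roughly like an approximate-time policy: a cloud and an image are paired only when their header stamps are close enough. A minimal ROS-free sketch of such pairing, assuming plain float stamps in seconds (all names here are illustrative; the node itself uses ROS message_filters):

```python
# Minimal sketch of pairing point-cloud and image messages by header
# timestamp, mimicking an approximate-time sync policy. Illustrative
# only; not the node's actual code.

def pair_by_stamp(cloud_stamps, image_stamps, max_dt=0.05):
    """Greedily pair stamps (seconds) whose difference is <= max_dt."""
    pairs = []
    j = 0
    for tc in cloud_stamps:
        # advance j to the image stamp closest to tc
        while (j + 1 < len(image_stamps)
               and abs(image_stamps[j + 1] - tc) < abs(image_stamps[j] - tc)):
            j += 1
        if j < len(image_stamps) and abs(image_stamps[j] - tc) <= max_dt:
            pairs.append((tc, image_stamps[j]))
            j += 1  # each image is consumed at most once
    return pairs

# The third cloud has no image within 50 ms, so it produces no output:
pairs = pair_by_stamp([0.00, 0.10, 0.20], [0.01, 0.12, 0.35])
# pairs == [(0.0, 0.01), (0.1, 0.12)]
```

This is why missing or zeroed header stamps silently stop the fusion output: no pair ever forms, so the fused topics never publish.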

abhishekt711 commented 1 year ago

Yes, they have timestamps. A few months back, with another algorithm, I already projected this point cloud onto the image and then applied YOLO to the result. It classifies obstacles and creates bounding boxes, and the points from each obstacle lie inside its bounding box. But my next question is: how do I calculate the depth of that obstacle?

EPVelasco commented 1 year ago

Could you show me the roslaunch file? I want to see the parameters you have set.

I also want to see the calibration matrix in the cfg folder.

abhishekt711 commented 1 year ago

Please check the cfg file and launch file. Also, can you look at the second picture and help me find the depth of the obstacle?

Can you please look at this? I have projected the point cloud onto the image before and applied YOLO to it; now I want to calculate the depth of the obstacle. Please check the image.

EPVelasco commented 1 year ago

Have you obtained the rlc and tlc values with any calibration method? If you have not used the one suggested in the README, it may be that your coordinate system is different from the one I used. Try modifying lines 326, 327, and 328 of the node.

And set them to:

  pc_matrix(0,0) = P_out->points[i].x;
  pc_matrix(1,0) = P_out->points[i].y;
  pc_matrix(2,0) = P_out->points[i].z;
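Those three lines just load the lidar point into the standard pinhole projection: p_cam = R·p + t, then u = fx·X/Z + cx and v = fy·Y/Z + cy. A small sketch with toy extrinsics and intrinsics (not the package's calibration values) shows why a wrong axis order scrambles the projection:

```python
# Sketch of the pinhole projection that consumes the lidar point:
# p_cam = R @ p + t, then u = fx*X/Z + cx, v = fy*Y/Z + cy.
# Toy extrinsics/intrinsics -- NOT the package's calibration values.

def project_point(p, R, t, K):
    """Project one lidar point p = (x, y, z) into pixel coordinates."""
    X = sum(R[0][k] * p[k] for k in range(3)) + t[0]
    Y = sum(R[1][k] * p[k] for k in range(3)) + t[1]
    Z = sum(R[2][k] * p[k] for k in range(3)) + t[2]
    if Z <= 0:
        return None  # behind the camera, not projectable
    fx, fy, cx, cy = K
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Identity extrinsics: a point 2 m straight ahead lands on the
# principal point (cx, cy).
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
K = (500.0, 500.0, 320.0, 240.0)
uv = project_point((0.0, 0.0, 2.0), R, t, K)  # (320.0, 240.0)
```

If the lidar and camera frames disagree on which axis points forward, Z comes out wrong or negative and every point is rejected or misplaced, which is exactly what swapping the x/y/z assignments fixes.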

EPVelasco commented 1 year ago

To calculate the depth of an object detected by YOLO, you need the pixel coordinates of the detected objects. Then the bounding box of each detected object is traversed with two FOR loops, and inside the loops you read the value of the depth point projected onto the image plane.

Then you can average those values or take their median. The article discusses how to calculate the distances of the objects and how shrinking the bounding box improves the depth estimation.
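The two-loop median idea can be sketched as follows, assuming a dense 2D array where each cell holds the projected lidar depth or None (hypothetical names and layout, not the article's code):

```python
# Sketch: estimate an obstacle's depth as the median of valid lidar
# depths inside its (optionally shrunken) YOLO bounding box.
# depth_map[v][u] holds the projected depth in metres, or None.
# Hypothetical names; not the article's actual code.

def bbox_depth(depth_map, u_min, v_min, u_max, v_max, shrink=0.0):
    du = int((u_max - u_min) * shrink / 2)  # pixels trimmed per side
    dv = int((v_max - v_min) * shrink / 2)
    values = []
    for v in range(v_min + dv, v_max - dv):      # the two FOR loops
        for u in range(u_min + du, u_max - du):
            d = depth_map[v][u]
            if d is not None:
                values.append(d)
    if not values:
        return None  # no lidar point fell inside the box
    values.sort()
    return values[len(values) // 2]  # median of the valid depths

# 4x4 toy depth image with three lidar hits inside the box:
dm = [[None] * 4 for _ in range(4)]
dm[1][1], dm[1][2], dm[2][1] = 2.0, 3.0, 2.5
d = bbox_depth(dm, 0, 0, 4, 4)  # median of [2.0, 2.5, 3.0] -> 2.5
```

The shrink parameter implements the bounding-box reduction mentioned above: trimming the box edges discards background points that leak into the detection and would otherwise skew the estimate.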

abhishekt711 commented 1 year ago

Thank you so much for helping me out and giving your time. The point cloud projected onto the image is working perfectly, and the colored point cloud is also working perfectly. Now can you please help me with reference code for depth estimation of the obstacle?

EPVelasco commented 1 year ago

Sorry, but I don't have any code on GitHub that does this process. I'm working on one, but I can't make it public yet.

af-doom commented 1 year ago

Thank you so much for helping me out and giving your time. The point cloud projected onto the image is working perfectly, and the colored point cloud is also working perfectly. Now can you please help me with reference code for depth estimation of the obstacle?

Hello, have you continued your research on depth estimation?