Okay, I'm going to try it today. I just wanted to extract the real distance between the lidar and a detected object using the pixel coordinates. How can I get that? Any idea? And what are "max_angle_width" and "max_angle_height"? In the launch file, both are set to 360.
max_angle_width is the maximum angle in the XY plane that is converted to the image (0°-360°). max_angle_height is the maximum angle perpendicular to the XY plane that is converted to the image. This value depends on the type of sensor: for the Velodyne VLP-16, the vertical viewing angle is 30°. Setting both to 360 means the entire point cloud will be converted.
For more information about these angles, you can review the documentation of pcl::RangeImage::createFromPointCloud.
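For reference, here is a minimal sketch of how those two parameters are passed to createFromPointCloud. The values and the identity sensor pose are illustrative (roughly following the PCL range-image tutorial), not necessarily what this package uses:

```cpp
#include <cmath>
#include <pcl/point_types.h>
#include <pcl/range_image/range_image.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ> cloud;
    // ... fill 'cloud' with lidar points ...

    float angular_resolution = 0.5f   * (M_PI / 180.0f); // angle covered by one pixel
    float max_angle_width    = 360.0f * (M_PI / 180.0f); // horizontal FoV (XY plane)
    float max_angle_height   = 360.0f * (M_PI / 180.0f); // vertical FoV; 360 keeps every point
    Eigen::Affine3f sensor_pose = Eigen::Affine3f::Identity();

    pcl::RangeImage range_image;
    range_image.createFromPointCloud(cloud, angular_resolution,
                                     max_angle_width, max_angle_height,
                                     sensor_pose, pcl::RangeImage::LASER_FRAME,
                                     0.0f /*noise level*/, 0.0f /*min range*/,
                                     1 /*border size*/);
    return 0;
}
```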
Thanks! And what about the first question: can we extract the real distance?
In the image2pc code (in the for loop on line 109), there is a way to get the distance to a pixel.
In short, the pixel coordinate (u,v) is transformed into an XYZ coordinate. First, the column index j (the u coordinate of the pixel) is converted to an angle with the formula:
float ang = M_PI - ((2.0 * M_PI * j) / l_img.cols);
This angle is then used to convert the image data to an X coordinate with:
float pc_x_data = sqrt(pow(img_data, 2) - pow(z_range(i,j), 2)) * cos(ang);
and a Y coordinate with:
float pc_y_data = sqrt(pow(img_data, 2) - pow(z_range(i,j), 2)) * sin(ang);
Finally, we get the Z coordinate from the second image (the one on the right) that pc2image saved.
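Putting those pieces together, a minimal sketch of the whole per-pixel conversion might look like this. The function name and the two cv::Mat_ inputs are mine, not from the repo; I'm assuming range_img holds the full Euclidean distance per pixel (the img_data above) and z_img holds the saved Z coordinates (the z_range(i,j) above):

```cpp
#include <cmath>
#include <algorithm>
#include <opencv2/core.hpp>

// Hypothetical helper: recover the XYZ point for a single pixel (i, j),
// following the formulas quoted above from image2pc.
cv::Point3f pixelToPoint(const cv::Mat_<float>& range_img,
                         const cv::Mat_<float>& z_img,
                         int i, int j)
{
    float img_data = range_img(i, j);  // full 3D range to the point
    float z        = z_img(i, j);      // Z coordinate saved by pc2image

    // Column index j -> azimuth angle in the XY plane.
    float ang = M_PI - ((2.0f * float(M_PI) * j) / range_img.cols);

    // Horizontal (XY-plane) distance: the full range with Z removed.
    // Clamped at 0 to guard against small numeric errors when z ~ img_data.
    float d_xy = std::sqrt(std::max(0.0f, img_data * img_data - z * z));

    return cv::Point3f(d_xy * std::cos(ang),  // X
                       d_xy * std::sin(ang),  // Y
                       z);                    // Z
}
```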
What are z_range(i,j) and img_data here?
And how do I get z_range(i,j) for a single point, not for an ROI?
Hello. To convert the depth image back to XYZ coordinates, you need to know the characteristics of the sensor, in particular the angles between the laser beams of each layer. One way around this is to save the Z-axis data for each point; that makes it much easier to convert the depth image back into a point cloud. I'm currently working on that code and hope to finish it this month.
Images published to ROS can be used with OpenCV through the cv_bridge library.
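For example, a minimal subscriber callback doing that conversion could look like the sketch below. The TYPE_32FC1 encoding is an assumption on my part; use whatever encoding the node actually publishes:

```cpp
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core.hpp>

// Convert an incoming ROS image message to a cv::Mat via cv_bridge.
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    cv::Mat range_image = cv_ptr->image;  // now a regular OpenCV matrix
    // ... read pixels, run detection, etc. ...
}
```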