EPVelasco / pc2image

This code converts a point cloud obtained by a Velodyne VLP16 3D LiDAR sensor into a mono16 depth image.
BSD 2-Clause "Simplified" License

Can we extract the XYZ coordinate from a given X,Y pixel coordinate? And can we convert this range image to an OpenCV image? #3

Closed Shibaditya99 closed 2 years ago

EPVelasco commented 2 years ago

Hello. To convert the depth image back to XYZ coordinates, you need to know the characteristics of the sensor, in particular the angles between the laser beams of each layer. One way around this is to save the Z-axis data for each of the points; that makes it much easier to convert the depth image back into a point cloud. I'm working on that code currently and hope to finish it this month.

Images published in ROS can be used with OpenCV through the cv_bridge library.
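
As a minimal sketch of that step, assuming the mono16 encoding and a made-up topic name (use whatever topic your pc2image launch file actually publishes):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Convert the ROS image message into an OpenCV image (CV_16UC1 for mono16).
  cv_bridge::CvImageConstPtr cv_ptr =
      cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::MONO16);
  const cv::Mat& depth = cv_ptr->image;
  ROS_INFO("Received %dx%d depth image", depth.cols, depth.rows);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pc2image_listener");
  ros::NodeHandle nh;
  // Topic name is an assumption; use the topic published by your pc2image launch file.
  ros::Subscriber sub = nh.subscribe("/pc2image/depth_image", 1, imageCallback);
  ros::spin();
  return 0;
}
```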

EPVelasco commented 2 years ago

I have a package that can reconstruct the image into a point cloud using the Z-axis information. You can find it at image2pc.

It requires another branch of pc2image.

Shibaditya99 commented 2 years ago

Okay, I'm going to try it today. I just wanted to extract the real distance between the lidar and a detected object using the pixel coordinates. How can I get that? Any idea? And what are "max_angle_width" and "max_angle_height"? In the launch file, both are 360.

EPVelasco commented 2 years ago

max_angle_width is the maximum angle along the XY plane that is converted to the image (0°-360°). max_angle_height is the maximum angle perpendicular to the XY plane that is converted to the image; this value depends on the type of sensor. For the Velodyne VLP16, the perpendicular (vertical) field of view is 30°. Setting 360 means that the entire point cloud will be converted.

For more information about these angles, you can review the documentation of pcl::RangeImage::createFromPointCloud.
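
As a hedged sketch of how those two parameters map onto the PCL call (the angular resolution, sensor pose, and coordinate frame below are assumptions, not the values used by pc2image):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/angles.h>
#include <pcl/range_image/range_image.h>

pcl::RangeImage buildRangeImage(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
  float angular_resolution = pcl::deg2rad(0.5f);   // assumed resolution per pixel
  float max_angle_width    = pcl::deg2rad(360.0f); // horizontal sweep in the XY plane
  float max_angle_height   = pcl::deg2rad(360.0f); // 360 keeps all points; the VLP16 only spans 30 deg vertically

  pcl::RangeImage range_image;
  range_image.createFromPointCloud(cloud, angular_resolution,
                                   max_angle_width, max_angle_height,
                                   Eigen::Affine3f::Identity(),   // sensor pose
                                   pcl::RangeImage::LASER_FRAME,  // lidar-style axes
                                   0.0f,   // noise_level
                                   0.0f,   // min_range
                                   0);     // border_size
  return range_image;
}
```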

Shibaditya99 commented 2 years ago

Thanks, and what about the first question? Can we extract the real distance?

EPVelasco commented 2 years ago

In the image2pc code (in the for loop on line 109), there is a way to get the distance to a pixel.

In short, the pixel coordinate (u,v) is transformed into an XYZ coordinate by first converting the column index j (the u coordinate) into a horizontal angle with the formula:

float ang = M_PI - ((2.0 * M_PI * j) / (l_img.cols));

This angle is then used to convert the image data to an X coordinate with:

float pc_x_data = sqrt(pow(img_data, 2) - pow(z_range(i, j), 2)) * cos(ang);

and a Y coordinate with:

float pc_y_data = sqrt(pow(img_data, 2) - pow(z_range(i, j), 2)) * sin(ang);

Finally, the Z coordinate comes from the right half of the image that was saved by pc2image.
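
Putting the three formulas together, a sketch of recovering a single XYZ point from pixel (i, j) could look like the following; the mono16-to-metres scale factors and the handling of negative Z are assumptions, not the exact image2pc code:

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <cstdint>
#include <cmath>

struct SimplePoint { float x, y, z; };

// i = row (laser layer), j = column (horizontal angle); l_img and z_range are the
// left and right halves of the pc2image output as CV_16UC1 matrices.
SimplePoint pixelToPoint(const cv::Mat& l_img, const cv::Mat& z_range,
                         int i, int j, float range_scale, float z_scale)
{
  // Horizontal angle of column j over the full 360-degree sweep.
  float ang = M_PI - ((2.0f * M_PI * j) / static_cast<float>(l_img.cols));

  // Decode the mono16 pixel values back to metres (scale factors are assumptions).
  float img_data = l_img.at<uint16_t>(i, j) * range_scale;  // full 3D range to the point
  float z        = z_range.at<uint16_t>(i, j) * z_scale;    // a real decoder would also undo any Z offset

  // Distance in the XY plane, then project it onto X and Y with the angle.
  float d = std::sqrt(std::max(0.0f, img_data * img_data - z * z));

  SimplePoint p;
  p.x = d * std::cos(ang);
  p.y = d * std::sin(ang);
  p.z = z;
  return p;
}
```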

Shibaditya99 commented 2 years ago

What are z_range(i,j) and img_data here?

EPVelasco commented 2 years ago

As shown in the pc2image image, the Z-axis data is the right half of the image and img_data is the left half. The image2pc code splits the pc2image image into z_range and img_data and converts them back into a point cloud.
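
A sketch of that split with OpenCV ROIs, assuming the two halves have the same width:

```cpp
#include <opencv2/core.hpp>

// combined: the full mono16 image published by pc2image (range on the left, Z on the right).
void splitDepthImage(const cv::Mat& combined, cv::Mat& img_data, cv::Mat& z_range)
{
  int half = combined.cols / 2;
  img_data = combined(cv::Rect(0,    0, half, combined.rows));  // left half: range data
  z_range  = combined(cv::Rect(half, 0, half, combined.rows));  // right half: Z-axis data
}
```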

Shibaditya99 commented 2 years ago

How can I convert this z_range(i,j) for a single point, not for an ROI?

EPVelasco commented 2 years ago

We get the Z coordinate from the right half of the image that pc2image saved. In pc2image there is a line that saves the Z-axis coordinates of the point cloud into the image, and in image2pc there is a line that converts that image information back into the Z coordinate.
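
For a single point, the round trip could be sketched like this; the minimum-Z offset and the millimetre resolution are assumptions, not pc2image's exact encoding:

```cpp
#include <cstdint>
#include <algorithm>

const float Z_MIN   = -2.0f;   // assumed lowest Z value the sensor sees (metres)
const float Z_SCALE = 0.001f;  // assumed resolution: one mono16 step = 1 mm

// pc2image side: Z coordinate of one point -> 16-bit pixel value.
uint16_t encodeZ(float z)
{
  float steps = (z - Z_MIN) / Z_SCALE;
  return static_cast<uint16_t>(std::min(std::max(steps, 0.0f), 65535.0f));
}

// image2pc side: 16-bit pixel value -> Z coordinate of one point.
float decodeZ(uint16_t pixel)
{
  return pixel * Z_SCALE + Z_MIN;
}
```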