lgsvl / simulator

A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

Lidar Intensity model #146

Closed khaledelmadawi closed 3 years ago

khaledelmadawi commented 5 years ago

Hi all, from what I see in rviz and the lidar data, is the intensity provided in the PC2 point cloud a linear function of the material ID, or is it the material ID of the surrounding vehicles? (see attached image)

martins-mozeiko commented 5 years ago

Currently the lidar generates intensity as the grayscale value (0..255) of the color at the point where the beam hits something.

khaledelmadawi commented 5 years ago

Thanks Martins. My question was whether the intensity changes linearly with two parameters (distance and material type). For example, if the beam hits a lane marking near its middle, will it have one value, and if it hits the lane marking at its ends, another? Look at the green/yellow values in the above image.

Or is the lane intensity always a fixed set of intensities that depends only on the material ID of the lanes?

martins-mozeiko commented 5 years ago

Intensity comes from the texture that is applied to the material. For example, the road texture with lanes is here: https://i.imgur.com/zlhhrxp.jpg. When the lidar hits this texture, it takes the RGB value, converts it to grayscale (0..255), and uses that number as the intensity.
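A minimal sketch of that conversion, using the plain (R+G+B)/3 average that is described later in this thread:

```python
def rgb_to_intensity(r, g, b):
    """Convert an 8-bit RGB texture sample to a 0..255 lidar intensity.

    Uses the plain average (R+G+B)/3, as described in this thread.
    """
    return (r + g + b) // 3

# Bright lane paint yields high intensity, dark asphalt low intensity.
print(rgb_to_intensity(255, 255, 255))  # -> 255
print(rgb_to_intensity(60, 60, 60))     # -> 60
```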

khaledelmadawi commented 5 years ago

Thanks Martins. Can the simulator provide a point cloud with the fields {X, Y, Z, I, R, G, B, MaterialID}?

```python
from sensor_msgs.msg import PointField

# add x, y, z, intensity, r, g, b, materialID fields
fields = []
fields.append(PointField('x', 0, PointField.FLOAT32, 1))
fields.append(PointField('y', 4, PointField.FLOAT32, 1))
fields.append(PointField('z', 8, PointField.FLOAT32, 1))
fields.append(PointField('intensity', 12, PointField.FLOAT32, 1))
fields.append(PointField('R', 16, PointField.FLOAT32, 1))
fields.append(PointField('G', 20, PointField.FLOAT32, 1))
fields.append(PointField('B', 24, PointField.FLOAT32, 1))
fields.append(PointField('materialID', 28, PointField.FLOAT32, 1))
```
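With those offsets, each point occupies 32 bytes. A quick pure-Python check of that byte layout (illustrative values only, no ROS required):

```python
import struct

# Pack one point as 8 consecutive little-endian float32 values, matching
# the field offsets above: x,y,z,intensity at 0/4/8/12 and
# R,G,B,materialID at 16/20/24/28 -> 32 bytes per point.
point = struct.pack('<8f', 1.0, 2.0, 0.5, 200.0, 255.0, 255.0, 255.0, 7.0)
assert len(point) == 32

# Recover the intensity field from its byte offset (12).
intensity, = struct.unpack_from('<f', point, 12)
print(intensity)  # -> 200.0
```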
martins-mozeiko commented 5 years ago

Generating R, G, B values for each point is easy: the simulator is already reading them to calculate intensity as (R+G+B)/3. You'll need to modify the LidarShader.shader and LidarSensor.cs files to store and retrieve the R/G/B values, then package them in the point cloud ROS/Cyber message.

Generating MaterialID would be a larger change, as the GPU currently does not have that information. It would require some convention for assigning an ID to every material, plus a way to provide that information to the GPU.
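One possible convention (purely illustrative, not part of the simulator) is a registry that assigns sequential IDs to material names as they are first encountered; the resulting table would then have to be made available to the lidar shader:

```python
class MaterialRegistry:
    """Assigns a stable small-integer ID to each material name.

    Hypothetical sketch: the simulator would need something like this on
    the CPU side, with the ID table uploaded to the GPU so the lidar
    shader can output a material ID per hit.
    """
    def __init__(self):
        self._ids = {}

    def id_for(self, material_name):
        # Materials get the next free ID the first time they are seen.
        if material_name not in self._ids:
            self._ids[material_name] = len(self._ids)
        return self._ids[material_name]

registry = MaterialRegistry()
print(registry.id_for('RoadAsphalt'))  # -> 0
print(registry.id_for('LanePaint'))    # -> 1
print(registry.id_for('RoadAsphalt'))  # -> 0 (stable on repeat lookups)
```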

khaledelmadawi commented 5 years ago

Thank you Martins for your informative reply. Can the simulator provide a semantic segmentation of the surrounding environment?

martins-mozeiko commented 5 years ago

Yes, it can. Enabling the "Segmentation Camera" sensor will publish a semantic segmentation image over a ROS topic. You will see an image like this: https://youtu.be/NgW1P75wiuA?t=213 (watch from 3:33).

khaledelmadawi commented 5 years ago

No, I mean semantic segmentation of the lidar point cloud.

martins-mozeiko commented 5 years ago

The lidar sensor itself does not provide this.

Currently you could enable GroundTruthSensor3D, which publishes 3D bounding boxes for all vehicles and pedestrians. Then, when processing the lidar point cloud, you can check whether each point falls inside one of the bounding boxes to determine which object it belongs to.
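A minimal sketch of that post-processing step, assuming axis-aligned boxes (the simulator's ground-truth boxes also carry a rotation, which you would first apply inversely to each point; that step is omitted here for brevity):

```python
def point_in_aabb(point, box_min, box_max):
    """Check whether an (x, y, z) point lies inside an axis-aligned box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def label_points(points, boxes):
    """Label each point with the index of the first containing box, or -1."""
    labels = []
    for p in points:
        labels.append(next((i for i, (lo, hi) in enumerate(boxes)
                            if point_in_aabb(p, lo, hi)), -1))
    return labels

boxes = [((0, 0, 0), (4, 2, 2))]    # one vehicle-sized ground-truth box
points = [(1, 1, 1), (10, 0, 0)]    # one lidar point inside, one outside
print(label_points(points, boxes))  # -> [0, -1]
```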

khaledelmadawi commented 5 years ago

Is this feature planned for the coming releases?

kxhit commented 4 years ago

Hi guys! I have a similar question about semantic point cloud generation. Could I obtain semantic point clouds from this simulator, i.e. assign every lidar point (x, y, z) a semantic label (e.g. car, building, road, vegetation) the way Semantic-KITTI does? Thanks a lot if anyone can give me some guidance.

rongguodong commented 4 years ago

A semantically segmented point cloud is not provided at this time. But as @martins-mozeiko pointed out above, you can use the output of the Lidar sensor and the GroundTruth3DSensor to build a segmented point cloud yourself.

EricBoiseLGSVL commented 4 years ago

@khaledelmadawi @kxhit We do not have plans to implement this feature right now, but we will see whether we can add it to our roadmap. We want users to be able to leverage the simulator for uses beyond what we intended, and this is a great example. If you start a PR while we figure out how to allocate resources to this, it will speed up development.

Also, please post any specifics you have to help scope this feature.

rongguodong commented 4 years ago

@khaledelmadawi @kxhit Until we support a segmented Lidar sensor, you may take a look at the source code of the current Lidar sensor and the segmentation camera sensor and combine them into a segmented Lidar sensor yourself. You are welcome to contribute it here. :)