Closed · user025 closed this issue 5 years ago
I'm experiencing the same problem. The Lidar sensor seems to "see" (the laser is reflected at) the bounding box of a vehicle instead of its mesh.
Upper left is a screenshot of the Jeep and its collision bounding box in Unreal Editor; lower left is an image from a regular camera showing the Jeep at the far left; on the right is the Lidar point cloud of the Jeep, which is "just" a cube with spheres at the wheel positions.
The Jeep seems to have the biggest overhang of all vehicle models, but these huge spheres significantly inflate the vehicles in the Lidar data and cause problems for navigation and driving, since the apparent clearance is reduced.
@johannesquast Then it makes sense, because CARLA generates lidar data by collision detection... Would you kindly share how you found the collision bounding box? There are several collision profiles in UE4; I'd like to try again and see whether there is anything I can do with those settings.
@user025: Sure.
This is a known issue: the bounding box is also a collision box, hence it's better from a performance perspective to keep it as simple as possible.
There is a way to change this: https://github.com/carla-simulator/carla/issues/1010#issuecomment-444318690 (a simple convex hull works well too, though). Don't change the 2-wheelers, as it affects their physics and makes their wheels behave "wonky".
Switching to a depth-view based point cloud fixes everything, and the detail is far better (and it's faster).
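To make the depth-view idea concrete, here is a minimal sketch of back-projecting a depth image into a 3D point cloud. The function name and the pinhole-camera assumptions are mine, not CARLA's API; it only assumes you already have an HxW array of metric depths (as CARLA's depth camera provides after decoding).

```python
import numpy as np

def depth_to_points(depth, fov_deg=90.0):
    """Back-project a depth image (meters) into camera-frame 3D points.

    Assumes a pinhole model with the principal point at the image
    center: x points right, y points down, z points forward.
    """
    h, w = depth.shape
    # focal length in pixels from the horizontal field of view
    f = w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f   # right
    y = (v - cy) * depth / f   # down
    z = depth                  # forward
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Four such cameras (one per 90° of yaw) cover the full horizontal sweep of a spinning lidar.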
@Soolek Thanks for your kind reply. I understand the collision-box point; it's just confusing to see parts of the mesh stick out of the bounding box.
I don't use a depth-view based point cloud because our project needs all the information in the LidarMeasurement class, as well as ground truth, not just a point cloud.
The collision-based lidar can get that information more easily and accurately (although it's slow; I've found some hacky ways to speed it up a little).
A depth-view based point cloud is, to me, more like an image. I haven't figured out how to make it abide by the constraints set by channels, upper-fov and lower-fov. Any solution would be a great help.
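One way to impose the channel/upper-fov/lower-fov constraints on a dense depth-derived cloud is to keep only points whose elevation angle matches one of the discrete channel angles. This is a sketch; the parameter names mirror CARLA's lidar attributes, but the function itself is illustrative, not part of any API.

```python
import numpy as np

def emulate_lidar_channels(points, channels=32, upper_fov=10.0,
                           lower_fov=-30.0, tol_deg=0.2):
    """Filter an Nx3 cloud (z-up sensor frame) to discrete lidar channels.

    Channel angles are spaced evenly between upper_fov and lower_fov
    (degrees); a point survives only if its elevation is within
    tol_deg of its nearest channel angle.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                       # horizontal range
    elev = np.degrees(np.arctan2(z, r))      # elevation angle per point
    chan_angles = np.linspace(upper_fov, lower_fov, channels)
    # distance from each point's elevation to every channel angle
    nearest = chan_angles[np.argmin(np.abs(elev[:, None] - chan_angles[None, :]),
                                    axis=1)]
    mask = np.abs(elev - nearest) <= tol_deg
    return points[mask], mask
```

A tighter `tol_deg` approximates a thinner beam; the same binning also tells you which channel each surviving point belongs to.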
Hey @user025, how did you manage to get the ground-truth data, if you don't mind sharing? I keep reading the docs, but I can't find a way to get labels.
I'm using a precompiled CARLA package, version 0.9.5
The point cloud of a Volkswagen T2 looks like this:
Both the car and the lidar are static. Here is how I generated the bounding box:
(The axes are the world's axes, and the origin is set to the car's position. Just ignore the offsets and focus on the scale.)
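For reference, the inside-the-box test behind this kind of ground-truth labeling can be sketched as a plain axis-aligned check. The `center`/`extent` semantics follow `carla.BoundingBox.location` / `.extent` (half-sizes), but the function and the `margin` parameter are my own illustration; `points` must already be transformed into the vehicle's local frame.

```python
import numpy as np

def points_in_box(points, center, extent, margin=0.0):
    """Axis-aligned inside-test for an Nx3 cloud in the vehicle frame.

    center: box center (3,), extent: half-sizes (3,).
    margin inflates the box uniformly, e.g. to tolerate wheel overhang.
    Returns a boolean mask of points inside the (inflated) box.
    """
    d = np.abs(points - np.asarray(center, dtype=float))
    return np.all(d <= np.asarray(extent, dtype=float) + margin, axis=1)
```

With the collision-based lidar, a small `margin` is one pragmatic workaround for wheel points that fall just outside the box.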
This problem makes it hard to generate ground truth from the point cloud, because the wheel points lie outside the bounding box.
Is this expected behavior? How can I improve it?