carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

Bounding Box ground truth incompatibility #1312

Closed mzahran001 closed 5 years ago

mzahran001 commented 5 years ago

Point cloud data is not compatible with the bounding box ground truth in CARLA 0.9.3. It seems there is a problem with the position of these boxes.

Have you checked their compatibility?

nsubiron commented 5 years ago

What sort of incompatibility?

Also keep in mind that the volumes reported by the LiDAR are simplified collision boxes.


mzahran001 commented 5 years ago

@nsubiron Thank you for your response! I really want to thank you for your efforts with CARLA. I will try to summarize the problems I have faced so far with the LiDAR in CARLA (stable version and 0.9.3).

  1. The point cloud axes are flipped, so if I plot the raw point cloud returned by CARLA it appears upside down.

  2. The second problem with the point cloud is related to object-level annotation (semantic segmentation, instance segmentation, and bounding boxes). It would be helpful if you could put some effort in this direction, to make it easy for CARLA users to produce annotations. Right now there is no obvious, well-defined way to annotate the point cloud using the information we get from CARLA. For example, we do not know the relationship between the camera and the LiDAR: I cannot find the projection matrix that would let me project, say, the semantic segmentation camera onto the LiDAR, or vice versa.

The second issue is that there is no obvious relationship between the point cloud and the bounding boxes produced by CARLA.

The blue points are the point cloud and the orange ones are the box centers extracted using agent.vehicle.transform.location. It is obvious that we have a scaling and shifting problem.
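
A minimal numpy sketch of how one might work around the flipped-axis issue when plotting. Which axis needs negating depends on the CARLA version and on the plotting convention (Unreal uses a left-handed coordinate system), so both flags are exposed rather than hard-coded; this is an illustration, not the official fix.

```python
import numpy as np

def flip_lidar_axes(points, flip_z=True, flip_y=False):
    """Return a copy of an (N, 3) point cloud with selected axes negated.

    The axis to flip depends on the CARLA version and the viewer's
    handedness convention, hence the flags.
    """
    out = np.asarray(points, dtype=float).copy()
    if flip_y:
        out[:, 1] = -out[:, 1]
    if flip_z:
        out[:, 2] = -out[:, 2]
    return out

# Example: a point below the sensor plots above it after the z flip.
cloud = np.array([[1.0, 2.0, -0.5]])
print(flip_lidar_axes(cloud))  # [[1.  2.  0.5]]
```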

mzahran001 commented 5 years ago
  1. Relativity problem: there are three reference frames in CARLA so far (world, vehicle, and sensor), and there is no obvious way to transform between these three frames, or at least to extract the corresponding matrices.

  2. Finally, CARLA returns all the boxes in the map; it does not pick out the ones the sensor is seeing at the moment. I am aware of the method implemented in CARLA 0.9.3 that helps me find nearby cars, but I think it would be crucial if CARLA returned from the start only the cars the sensor is seeing.
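
As a sketch of the frame conversion described in point 1: given a sensor's world transform (location plus roll/pitch/yaw, as a carla.Transform reports them), one can build the 4x4 sensor-to-world matrix by hand and invert it to map world points (e.g. bounding-box centers) into the sensor frame. The yaw-pitch-roll rotation order and degrees convention below are assumptions matching the usual Unreal convention; verify them against your CARLA version.

```python
import numpy as np

def transform_matrix(location, rotation_deg):
    """4x4 sensor-to-world matrix from (x, y, z) and (roll, pitch, yaw)
    in degrees. Rotation order yaw * pitch * roll is an assumption."""
    roll, pitch, yaw = np.radians(rotation_deg)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = location
    return M

def world_to_sensor(points_world, sensor_matrix):
    """Map (N, 3) world points into the sensor frame by inverting the
    sensor-to-world matrix."""
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (np.linalg.inv(sensor_matrix) @ pts.T).T[:, :3]

# A sensor at (10, 0, 2) with no rotation: a world point at (12, 0, 2)
# lands 2 m in front of the sensor along x.
M = transform_matrix([10.0, 0.0, 2.0], [0.0, 0.0, 0.0])
print(world_to_sensor(np.array([[12.0, 0.0, 2.0]]), M))  # ~[[2. 0. 0.]]
```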

mzahran001 commented 5 years ago
  1. There is a weird object behind the car (this problem appeared in the stable version). I am confident that there is no object there.

analog-cbarber commented 5 years ago

All sensors in CARLA have a transform, so you do have enough information to translate coordinates from one sensor to another. You also have enough information to compute the projection matrices for the camera sensors. CARLA does not provide an API for translating or projecting points from the various coordinate systems (AFAIK), so you have to write your own or find a suitable 3rd party library.
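
As a sketch of computing a camera projection matrix from the sensor attributes (image width, height, and horizontal FOV): the pinhole model below is the standard way a CARLA-style camera is modeled, though the exact image-plane axis convention is an assumption to check against your setup.

```python
import numpy as np

def intrinsic_matrix(width, height, fov_deg):
    """Pinhole intrinsic matrix for a camera described by its image size
    and horizontal field of view: the focal length in pixels follows
    from f = w / (2 * tan(fov / 2))."""
    focal = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    return np.array([[focal, 0.0, width / 2.0],
                     [0.0, focal, height / 2.0],
                     [0.0, 0.0, 1.0]])

def project(point_cam, K):
    """Project a 3D point in camera coordinates (x right, y down,
    z forward -- an assumed convention) onto the image plane."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]

K = intrinsic_matrix(800, 600, 90.0)
# A point straight ahead projects to the image center.
print(project([0.0, 0.0, 10.0], K))  # [400. 300.]
```

With the sensor-to-world transforms of both the camera and the LiDAR, chaining (LiDAR to world, world to camera, then this projection) gives the LiDAR-to-image mapping the earlier comments ask about.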

It is not too hard to cull bounding boxes for objects that are not within a camera frustum or that are too far away. Culling occluded bounding boxes is a harder problem. For our project, we used a depth sensor (I imagine the LIDAR should work as well) to test to see if rays directed at a bounding box actually reach the box for some set of test points. This works very well for boxes that are entirely visible or entirely occluded, and still does a pretty good job on the partly occluded instances as well.
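
A minimal sketch of the frustum-culling step described above (the harder occlusion test against a depth image is omitted): project each box center with the camera intrinsics and keep only boxes that land inside the image with positive depth and within range. All names here are illustrative, not CARLA API.

```python
import numpy as np

def in_frustum(point_cam, K, width, height, max_dist=100.0):
    """True if a point in camera coordinates projects inside the image,
    lies in front of the camera, and is closer than max_dist."""
    x, y, z = point_cam
    if z <= 0.0 or np.sqrt(x * x + y * y + z * z) > max_dist:
        return False
    u, v, w = K @ np.array([x, y, z])
    u, v = u / w, v / w
    return 0.0 <= u < width and 0.0 <= v < height

K = np.array([[400.0, 0.0, 400.0],
              [0.0, 400.0, 300.0],
              [0.0, 0.0, 1.0]])
centers = [(0.0, 0.0, 20.0),    # ahead of the camera: visible
           (0.0, 0.0, -5.0),    # behind the camera
           (50.0, 0.0, 10.0)]   # far off to the side
print([in_frustum(c, K, 800, 600) for c in centers])
# [True, False, False]
```

For the occlusion test the comment describes, one would additionally compare each test point's depth against the depth-sensor value at its projected pixel.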

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

barbierParis commented 5 years ago

Hey @moh3th1, did you find another way to get semantic labels from the LiDAR sensor?