Closed. mzahran001 closed this issue 5 years ago.
What sort of incompatibility?
Also keep in mind that the volumes reported by the Lidar are simplified collision boxes
@nsubiron Thank you for your response! I really want to thank you for your efforts on CARLA. Let me summarize the problems I have faced so far with the Lidar in CARLA (stable version and 0.9.3).
The point cloud axes are flipped, so if I plot the raw point cloud returned by CARLA, it appears upside down.
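For what it's worth, the upside-down plot is usually a handedness mismatch: Unreal (and hence CARLA) uses a left-handed coordinate system, while most plotting tools assume a right-handed one. A minimal sketch of the fix, assuming the flip is on the z axis (some setups need y instead), with illustrative data standing in for a real Lidar measurement:

```python
import numpy as np

# Stand-in for the raw (N, 3) point array returned by a CARLA Lidar measurement.
points = np.array([[1.0, 2.0, -0.5],
                   [3.0, -1.0, -0.2]])

# Negate one axis to convert between left- and right-handed conventions.
# Here z is flipped, matching the "upside down" symptom; verify which axis
# needs the flip for your CARLA version and plotting library.
flipped = points * np.array([1.0, 1.0, -1.0])
```

After this the cloud should plot right-side up; the same element-wise multiply works on clouds of any size thanks to NumPy broadcasting.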
The second problem is object-level annotation of the point cloud (semantic segmentation, instance segmentation, and bounding boxes). It would be very helpful if you put some effort in this direction, making it easy for CARLA users to produce these annotations. Right now there is no obvious, well-defined way to annotate the point cloud using the information CARLA provides. For example, we do not know the relationship between the camera and the Lidar: I cannot find the projection matrix that would let me project, say, the semantic segmentation camera onto the Lidar, or vice versa.
Another issue is that there is no obvious relationship between the point cloud and the bounding boxes produced by CARLA.
In CARLA 0.9.3: I tried to derive a relationship between the bounding boxes and the point cloud by trial and error and some geometry, but it only holds for a single run. If I collect data (1,000 frames) and restart the server to collect another 1,000, all the values I found become useless; the relation between the point cloud and the boxes changes every run! This is what I meant by bounding-box ground-truth incompatibility.
In the stable version: there is also the problem that we cannot project or relate the bounding boxes to the point cloud. If we plot the raw centers of the bounding boxes together with the point cloud, we get this output.
The blue points are the point cloud and the orange ones are the box centers extracted with agent.vehicle.transform.location. It is obvious that we have a scaling and shifting problem.
Reference-frame problem: there are three reference frames in CARLA so far (world, vehicle, and sensor). There is no obvious way to transform between them, or at least to extract the corresponding matrices.
Finally, CARLA returns all the boxes in the map; it does not pick out the ones the sensor is seeing at the moment. I am aware of the method implemented in CARLA 0.9.3 that helps find nearby cars, but I think it would be crucial for CARLA to return, from the start, only the cars the sensor is actually seeing.
All sensors in CARLA have a transform, so you do have enough information to translate coordinates from one sensor to another. You also have enough information to compute the projection matrices for the camera sensors. CARLA does not provide an API for translating or projecting points from the various coordinate systems (AFAIK), so you have to write your own or find a suitable 3rd party library.
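To make the "write your own" part concrete, here is a minimal NumPy sketch of both pieces: a 4x4 sensor-to-world matrix built from a transform's location and pitch/yaw/roll, and pinhole intrinsics derived from a camera's horizontal FOV. The rotation convention below follows UE4's yaw/pitch/roll order as used in CARLA's client examples, and the example locations/rotations are illustrative; verify both against your CARLA version.

```python
import numpy as np

def transform_matrix(location, rotation):
    """4x4 sensor-to-world matrix from a CARLA-style transform.
    location = (x, y, z); rotation = (pitch, yaw, roll) in degrees.
    Rotation order follows UE4's convention (an assumption to verify)."""
    x, y, z = location
    pitch, yaw, roll = np.radians(rotation)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    m[0, :3] = (cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr)
    m[1, :3] = (sy * cp, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr)
    m[2, :3] = (sp, -cp * sr, cp * cr)
    return m

def intrinsic_matrix(width, height, fov_deg):
    """Pinhole intrinsics for a camera: focal length from horizontal FOV."""
    focal = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    k = np.identity(3)
    k[0, 0] = k[1, 1] = focal
    k[0, 2] = width / 2.0
    k[1, 2] = height / 2.0
    return k

# Move a point from the Lidar frame into the camera frame:
# world <- lidar, then camera <- world (inverse of camera-to-world).
lidar_to_world = transform_matrix((2.0, 0.0, 1.8), (0.0, 0.0, 0.0))
cam_to_world = transform_matrix((1.5, 0.0, 1.4), (0.0, 0.0, 0.0))
p_lidar = np.array([10.0, 0.0, 0.0, 1.0])
p_cam = np.linalg.inv(cam_to_world) @ lidar_to_world @ p_lidar
```

The same chain works for any pair of frames (sensor-to-vehicle, vehicle-to-world): compose the forward matrices and invert whichever side you need to go "into".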
It is not too hard to cull bounding boxes for objects that are not within the camera frustum or that are too far away. Culling occluded bounding boxes is a harder problem. For our project, we used a depth sensor (I imagine the Lidar should work as well) to test whether rays directed at a bounding box actually reach the box, for some set of test points. This works very well for boxes that are entirely visible or entirely occluded, and still does a pretty good job on the partly occluded instances.
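The depth-based test described above can be sketched roughly as follows. This is a minimal sketch, not the project's actual code: the function name, tolerance, and the camera convention (x right, y down, z forward, metric depth map) are all assumptions, and the test points would come from sampling the bounding box.

```python
import numpy as np

def visible_by_depth(depth_map, k, box_points_cam, tol=1.5):
    """Rough occlusion test: project test points from a bounding box into the
    image and compare their camera-space depth against the rendered depth map.
    A point counts as visible if the depth sensor sees at least as far as it.
    depth_map: (H, W) metric depths; k: 3x3 intrinsics;
    box_points_cam: (N, 3) points already in the camera frame."""
    h, w = depth_map.shape
    visible = 0
    for x, y, z in box_points_cam:
        if z <= 0:
            continue  # behind the camera: outside the frustum
        u = int(k[0, 0] * x / z + k[0, 2])
        v = int(k[1, 1] * y / z + k[1, 2])
        if not (0 <= u < w and 0 <= v < h):
            continue  # projects outside the image: frustum cull
        if depth_map[v, u] + tol >= z:
            visible += 1  # the ray reaches (roughly) this test point
    return visible > 0

# Toy usage: a 100x100 depth map with nothing nearer than 100 m, so a test
# point 10 m ahead of the camera is unoccluded.
k = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
open_view = np.full((100, 100), 100.0)
blocked_view = np.full((100, 100), 2.0)  # a wall 2 m away occludes the point
point = np.array([[0.0, 0.0, 10.0]])
```

A box is then kept or dropped depending on how many of its test points pass; requiring more than one visible point, or a visible fraction, trades recall for precision on partly occluded objects.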
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hey @moh3th1, did you find another way to get semantic labels from the Lidar sensor?
Point cloud data is not compatible with the bounding-box ground truth in CARLA 0.9.3. It seems there is a problem with the position of these boxes.
Have you checked their compatibility?