Open mariya12290 opened 3 years ago
Hello @mariya12290, I am actually working on the same issue. Can you explain to me the impact of the mounting height of the LiDAR? For example, I use a BEV map as the input of my model, and when I change the Z range boundary (the height boundary of the LiDAR points) I obtain different results, and sometimes no detections at all. So the Z range boundary, and therefore the mounting height of the LiDAR sensor, has an impact on the detection results, but I don't understand why. If you can help me, thank you!
We release the extrinsics of each LiDAR, which contain the height information of each LiDAR.
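For anyone looking for the actual numbers, here is a minimal sketch of how those extrinsics can be read with the waymo-open-dataset package (the segment path is a placeholder); the translation z of each extrinsic is the sensor height in the vehicle frame:

```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

FILENAME = 'segment-xxxx.tfrecord'  # placeholder path to a downloaded segment

dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for data in dataset:
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    for calibration in frame.context.laser_calibrations:
        # extrinsic.transform is a row-major 4x4 matrix (sensor -> vehicle frame);
        # elements 3, 7, 11 are the translation x, y, z.
        tx, ty, tz = (calibration.extrinsic.transform[3],
                      calibration.extrinsic.transform[7],
                      calibration.extrinsic.transform[11])
        print(open_dataset.LaserName.Name.Name(calibration.name), tx, ty, tz)
    break  # one frame is enough; the calibration is static per segment
```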
Hey @SofianeB-03, most models for object detection with LiDAR have the height range as a hyperparameter. Consider images: you take a bunch of images and feed them into the model as they are. That is not the case with LiDAR. The data is not uniform (roads are not always flat), so when we create a BEV map for a fixed height, length, and width, e.g. [5, 20, 40], the BEV map may be built incorrectly for some LiDAR frames (for example, only half of an object falls inside the height range). The network then learns garbage, which leads to lower accuracy or no detections. A small illustration is sketched below.
Note: this is my understanding, but I am not completely sure yet.
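To illustrate, here is a tiny self-contained sketch (all numbers are made up) of how a fixed Z range crops points before a BEV map is built; an object that straddles the boundary loses part of its points, so the BEV input no longer matches the label:

```python
import numpy as np

# Hypothetical points of one pedestrian in the vehicle frame: x, y, z in meters.
points = np.array([
    [10.0, 2.0, -0.4],   # foot-level point
    [10.0, 2.0,  0.6],   # torso
    [10.0, 2.0,  1.5],   # head
])

# Fixed BEV height boundaries (a hyperparameter of many LiDAR detectors).
z_min, z_max = -0.2, 1.2

mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
cropped = points[mask]
print(f'{len(points) - len(cropped)} of {len(points)} points discarded')
# If the sensor height or the road slope shifts the points in z, a different
# share of the object is discarded, so the BEV input (and the detections) change.
```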
Thank you for your great explanation! Maybe this information can help you: the Waymo 3D points are in the vehicle frame, and in the closed issue #99 I noted that the origin of the vehicle frame is near the ground.
@peisun1115 , Thank you for your reply.
Hey @peisun1115 @SofianeB-03,
Can someone tell me the reason behind this kind of annotation in the Waymo dataset? Or did I do something wrong in the data conversion from KITTI to Waymo?
Here is the link: this
@SofianeB-03, do you also have this kind of annotation in your Waymo dataset? Could you please check and let me know?
Thanks in advance
Hey @SofianeB-03, if you have time, can you please check your data and let me know?
Hello @mariya12290, I checked some frames of my Waymo dataset but I don't have this kind of result. However, I did not use a KITTI-to-Waymo conversion: I get the 3D point clouds and the associated 3D labels using the waymo-open-dataset package.
I also checked the Waymo point cloud with Mayavi and it seems OK.
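For reference, the 3D labels can be read roughly like this with the waymo-open-dataset package (a sketch; the segment path is a placeholder, and the point-cloud extraction itself follows the official tutorial, so it is omitted here):

```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset
from waymo_open_dataset import label_pb2

FILENAME = 'segment-xxxx.tfrecord'  # placeholder path

dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for data in dataset:
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    # 3D boxes are given in the vehicle frame.
    for label in frame.laser_labels:
        box = label.box
        print(label_pb2.Label.Type.Name(label.type),
              box.center_x, box.center_y, box.center_z,
              box.length, box.width, box.height, box.heading)
    break  # only the first frame, for a quick sanity check
```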
Hey @SofianeB-03
Can you check the frames where pedestrians are crossing near the car and are occluded or truncated by more than about 80%? Is there any ground truth for those objects?
Could you please check and let me know, because I need to write my thesis report soon?
Okay, I will do this and come back to you.
Hey @SofianeB-03
Here you can see an example. In the picture below, the pedestrian in the middle is almost completely occluded, but it still has a ground-truth box. Can you check whether there is anything like this in your dataset?
I checked my dataset but I didn't find situations with occluded pedestrians (because I use a subset of the Waymo data). However, I found this in the official Waymo tutorial.
Hey @SofianeB-03
Thank you so much, now it is clear to me. The Waymo annotations are quite difficult for a network to learn from; sometimes less than 25% of an annotated object is actually visible.
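If such heavily occluded boxes hurt training, one option is to filter them out beforehand. A sketch, assuming the label proto exposes num_lidar_points_in_box and detection_difficulty_level as in recent releases of the dataset:

```python
from waymo_open_dataset import label_pb2

def keep_label(label, min_points=5):
    """Drop boxes that contain too few LiDAR points (e.g. heavily occluded
    pedestrians) or that are flagged as LEVEL_2 difficulty."""
    if label.num_lidar_points_in_box < min_points:
        return False
    if label.detection_difficulty_level == label_pb2.Label.LEVEL_2:
        return False
    return True

# Usage, given a parsed Frame proto `frame`:
# train_labels = [l for l in frame.laser_labels if keep_label(l)]
```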
Hey admin and all,
Could someone tell me the position of the LiDAR on the Waymo vehicle? In the KITTI dataset the LiDAR is mounted 1.73 m above the ground, as stated in their paper, but I did not find any information about the LiDAR height above the ground in the Waymo paper.
Could someone help me with this?
Thanks in advance