Closed — tiesus closed this issue 9 months ago
Please have a look at the dev kit of the KITTI dataset (probably the object detection benchmark), which, as far as I know, specifies that the rotation of detections is given in the camera coordinate frame. The transformation from LiDAR to camera is given by the calibration file, which allows you to compute the corresponding transformation in LiDAR coordinates. It might also be that the dimensions of the bounding box are flipped, etc.
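To make the above concrete, here is a minimal sketch of converting a camera-frame box center and rotation_y into the LiDAR frame. The function name is hypothetical; it assumes the standard KITTI calibration matrices (`Tr_velo_to_cam`, 3x4, and `R0_rect`, 3x3) and the usual KITTI axis conventions (camera: x right, y down, z forward; LiDAR: x forward, y left, z up), under which the commonly used relation `yaw_lidar = -rotation_y - pi/2` holds:

```python
import numpy as np

def camera_to_lidar_box(loc_cam, ry, Tr_velo_to_cam, R0_rect):
    """Sketch: map a KITTI camera-frame box center and rotation_y
    to the LiDAR frame, given the calibration-file matrices."""
    # Build the full 4x4 transform from LiDAR to rectified camera coords.
    T = np.eye(4)
    T[:3, :] = Tr_velo_to_cam        # 3x4 matrix from the calib file
    R = np.eye(4)
    R[:3, :3] = R0_rect              # 3x3 rectification matrix
    cam_from_lidar = R @ T
    lidar_from_cam = np.linalg.inv(cam_from_lidar)

    # Transform the box center into LiDAR coordinates.
    loc_lidar = (lidar_from_cam @ np.append(loc_cam, 1.0))[:3]

    # rotation_y is about the camera Y axis (pointing down); the LiDAR
    # yaw is about the Z axis (pointing up). With standard KITTI axes:
    yaw_lidar = -ry - np.pi / 2
    return loc_lidar, yaw_lidar
```

Double-check the sign convention against your own calibration files before relying on it; this is only the commonly seen mapping, not something this tool guarantees.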
But note that I'm not a maintainer of the KITTI dataset; we are just providing a tool for point-wise semantic segmentation labeling, not bounding boxes. Therefore, I cannot provide any support regarding the KITTI dataset itself.
Therefore, I'm closing this issue, as it does not seem related to the tool we provide.
Hi,
thank you for the great tool. I want to label data to train a PointPillars network and use the implementation from the NVIDIA TAO Toolkit to do so (https://docs.nvidia.com/tao/tao-toolkit/text/point_cloud/pointpillars.html#preparing-the-dataset). The label format is pretty much the same as KITTI's, except for the last field of each label. In KITTI the last field is the rotation around the Y axis in the LiDAR frame, while PointPillars requires the last field to be the rotation around the Z axis in the LiDAR frame.
Here is an example:
How can I convert the labeled data from this tool to be compliant with the PointPillars format?
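One way to approach the conversion asked about above is to rewrite only the last field of each label line. The sketch below is a hypothetical helper, not part of this tool; it assumes the last field is KITTI's camera-frame rotation_y and the standard KITTI axis conventions, under which the Z-axis LiDAR yaw is commonly taken as `-rotation_y - pi/2`:

```python
import math

def kitti_line_to_pointpillars(line):
    """Sketch: replace the last field of a KITTI-style label line
    (rotation_y, camera frame) with a yaw about the LiDAR Z axis."""
    fields = line.split()
    ry = float(fields[-1])
    yaw_z = -ry - math.pi / 2
    # Wrap to [-pi, pi) so downstream tools get a canonical angle.
    yaw_z = (yaw_z + math.pi) % (2 * math.pi) - math.pi
    fields[-1] = f"{yaw_z:.2f}"
    return " ".join(fields)
```

Note that this only fixes the rotation field; if the box center or dimensions also need to move from the camera to the LiDAR frame, the calibration matrices from the KITTI calib file are required as well.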