Hi,
As you may know, KITTI labels are expressed in the camera frame.
However, the bbox_camera2lidar function uses rotation_y unchanged as the box yaw. Since the camera's y-axis is the lidar's -z-axis, shouldn't the yaw of the obstacle in the lidar frame be -rotation_y - pi/2?
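For concreteness, here is a minimal sketch of the conversion I have in mind (my own illustration, not the repo's code), assuming KITTI's camera frame (x right, y down, z forward) and the usual lidar frame (x forward, y left, z up):

```python
import numpy as np

# Sketch (my assumption, not the repo's code): map KITTI's rotation_y,
# a yaw about the camera y-axis (pointing down), to a yaw about the
# lidar z-axis (pointing up).
def camera_yaw_to_lidar_yaw(rotation_y: np.ndarray) -> np.ndarray:
    # Sign flip: camera y (down) is lidar -z (up).
    # -pi/2 offset: camera z (forward) becomes lidar x (forward),
    # so the zero-yaw reference rotates by a quarter turn.
    yaw = -rotation_y - np.pi / 2
    # Wrap to [-pi, pi) to keep angles in a consistent range.
    return (yaw + np.pi) % (2 * np.pi) - np.pi
```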
Also, the label dimensions are given in height, width, length order (https://github.com/bostondiditeam/kitti/blob/master/resources/devkit_object/readme.txt), but bbox_camera2lidar reorders them as xyz_size = np.concatenate([z_size, x_size, y_size], axis=1). So PointPillars is trained with dimensions in length, height, width order, right?
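A toy example of the reordering I mean (again my own illustration, with a made-up car-sized box, assuming the (h, w, l) columns are split into x_size, y_size, z_size in that order):

```python
import numpy as np

# KITTI labels store dimensions as (h, w, l); splitting them in that
# order into x_size, y_size, z_size and applying the concatenate above
# yields (l, h, w), i.e. length, height, width.
dims_hwl = np.array([[1.5, 1.6, 3.9]])  # typical car: h=1.5, w=1.6, l=3.9
x_size, y_size, z_size = np.split(dims_hwl, 3, axis=1)
xyz_size = np.concatenate([z_size, x_size, y_size], axis=1)
print(xyz_size)  # [[3.9 1.5 1.6]] -> (l, h, w)
```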