Closed: TossherO closed this issue 2 months ago.
The results I get look like this: the 3D GT bboxes and the image are clearly not aligned.
It is extremely difficult to control the projection perfectly and collect data without any projection error. This is especially true in outdoor scenes, where uneven ground often causes the robot frame to shake, leading to larger projection errors than in indoor environments.
Thank you for your understanding.
Hi! I'm trying to project the 3D GT bboxes onto the images, but the projected bboxes don't align with the targets in the image. I would like to know whether the camera intrinsic and extrinsic parameters provided with the dataset are accurate, and whether the intrinsics account for image distortion. The provided code does not use images. If possible, could you provide some examples of using the images, such as the transformation between images and point clouds, or 3D object detection with TransFusion (it was mentioned in your benchmark)? Thank you very much!
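For reference, this is roughly the projection I am attempting: a minimal sketch assuming a 4x4 lidar-to-camera extrinsic, a 3x3 pinhole intrinsic, and a box whose center is its geometric center. The names (`lidar2cam`, `cam_intrinsic`) and the placeholder calibration values are my own and not taken from the dataset, and distortion is ignored, which could itself account for part of the misalignment if the intrinsics assume undistorted images.

```python
import numpy as np

def box_corners_3d(center, size, yaw):
    """Return the 8 corners (3, 8) of a 3D box given center (x, y, z),
    size (l, w, h), and yaw about the z axis."""
    l, w, h = size
    # Corner offsets in the box frame, z measured from the box center.
    x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2
    y = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2
    z = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * h / 2
    corners = np.vstack([x, y, z])                       # (3, 8)
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
    return rot @ corners + np.asarray(center).reshape(3, 1)

def project_to_image(points_lidar, lidar2cam, cam_intrinsic):
    """Project 3D points in the LiDAR frame to pixel coordinates (pinhole, no distortion)."""
    pts = np.vstack([points_lidar, np.ones((1, points_lidar.shape[1]))])  # homogeneous (4, N)
    pts_cam = (lidar2cam @ pts)[:3]        # (3, N) in the camera frame
    in_front = pts_cam[2] > 0.1            # keep only points in front of the camera
    uvw = cam_intrinsic @ pts_cam          # pinhole projection
    uv = uvw[:2] / uvw[2]                  # normalize by depth
    return uv, in_front

# Placeholder calibration; in my script these are replaced by the dataset's
# lidar-to-camera extrinsic and camera intrinsic.
lidar2cam = np.eye(4)
cam_intrinsic = np.array([[1000.0, 0.0, 960.0],
                          [0.0, 1000.0, 540.0],
                          [0.0, 0.0, 1.0]])

corners = box_corners_3d(center=(10.0, 2.0, 0.5), size=(4.0, 1.8, 1.6), yaw=0.3)
uv, valid = project_to_image(corners, lidar2cam, cam_intrinsic)
print(uv.T[valid])  # pixel coordinates of the corners in front of the camera
```

If this matches the intended convention (center vs. bottom-center boxes, lidar-to-camera vs. camera-to-lidar extrinsic, distorted vs. undistorted intrinsics), please let me know which part differs for this dataset.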