Hi Li,

I'm trying to train on my own data. Unlike the KITTI dataset, the images I collect are grayscale and 752x480.

For the labels, the 2D bounding box is easy to annotate, but what about the rest, like alpha, dimensions, etc.? Are these essential for training? Orientation in particular is very hard to label. My data already includes calibration, velodyne, and left-right images (grayscale).

Looking forward to your reply, much obliged.
If you label only the 2D box and 3 keypoints for each car, we can estimate the 3D box for all non-truncated, normal-size cars. These annotations can be made from the 2D image alone.
If you label the 2D box, 3 keypoints, and viewpoint angle for each car, we can estimate the 3D box for all normal-size cars.
If you label the 2D box, 3 keypoints, viewpoint angle, and 3D size for each car, we can estimate the 3D box for all cars (including those with unusual sizes).
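As a rough illustration of the three annotation levels above, here is a minimal sketch of a per-car label record. The field names, types, and the `tier()` helper are my own invention for clarity, not the repo's actual label format:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical per-car annotation record (illustrative only).
@dataclass
class CarLabel:
    bbox: Tuple[float, float, float, float]       # 2D box: left, top, right, bottom (pixels)
    keypoints: List[Tuple[float, float]]          # 3 keypoints (u, v) in the image
    viewpoint: Optional[float] = None             # viewpoint angle in radians, if labeled
    dimensions: Optional[Tuple[float, float, float]] = None  # 3D size (h, w, l) in meters, if labeled

    def tier(self) -> int:
        """Which of the three labeling levels this annotation satisfies."""
        if self.viewpoint is not None and self.dimensions is not None:
            return 3  # 3D box recoverable for all cars, including unusual sizes
        if self.viewpoint is not None:
            return 2  # 3D box recoverable for all normal-size cars
        return 1      # 3D box recoverable for non-truncated, normal-size cars only

# A car with only the 2D box and 3 keypoints labeled falls in the first tier.
label = CarLabel(bbox=(100.0, 120.0, 260.0, 220.0),
                 keypoints=[(110.0, 215.0), (180.0, 218.0), (255.0, 214.0)])
print(label.tier())  # prints 1
```

Adding `viewpoint=...` moves the same record to the second tier, and adding `dimensions=...` as well moves it to the third.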