HKUST-Aerial-Robotics / Stereo-RCNN

Code for 'Stereo R-CNN based 3D Object Detection for Autonomous Driving' (CVPR 2019)
MIT License

What do I need to pay attention to when training on my own stereo data #42

Open CODEDISON opened 5 years ago

CODEDISON commented 5 years ago

Hi Li, I'm trying to train on my own data. Unlike the KITTI dataset, the images I collect are grayscale and 752x480. For the labels, the 2D bounding box is easy to annotate, but what about the rest, like alpha, dimensions, etc.? Are these essential for training? Orientation is especially hard to label. My data already includes calibration, velodyne, and left/right (grayscale) images. Looking forward to your reply, much obliged.
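For context on which fields the question refers to: a minimal sketch of parsing one KITTI-format label line, with field order following the KITTI object devkit (the sample values below are made up for illustration):

```python
# Sketch: parse one KITTI-format object label line into named fields.
# Field order follows the KITTI object devkit readme.
def parse_kitti_label(line):
    f = line.split()
    return {
        "type": f[0],                               # e.g. 'Car'
        "truncated": float(f[1]),                   # 0 (fully visible) .. 1 (fully truncated)
        "occluded": int(f[2]),                      # 0..3 occlusion state
        "alpha": float(f[3]),                       # observation angle, [-pi, pi]
        "bbox": [float(v) for v in f[4:8]],         # 2D box: left, top, right, bottom (px)
        "dimensions": [float(v) for v in f[8:11]],  # 3D size: height, width, length (m)
        "location": [float(v) for v in f[11:14]],   # 3D position in camera coords: x, y, z (m)
        "rotation_y": float(f[14]),                 # yaw around camera Y axis, [-pi, pi]
    }

# Illustrative (made-up) label line:
sample = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
label = parse_kitti_label(sample)
print(label["alpha"], label["dimensions"])
```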

CODEDISON commented 5 years ago

@PeiliangLi

PeiliangLi commented 5 years ago

Basically, there are several options:

  1. Label only the 2D box and 3 keypoints for each car; then we can estimate the 3D box for all non-truncated normal cars. These annotations can be labeled from the 2D image alone.
  2. Label the 2D box, 3 keypoints, and viewpoint angle for each car; then we can estimate the 3D box for all normal cars.
  3. Label the 2D box, 3 keypoints, viewpoint angle, and 3D size for each car; then we can estimate the 3D box for all cars (including those of varying size).
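Since velodyne data is available, one possible route (a sketch, not part of the authors' pipeline) is to fit 3D boxes to the point cloud and derive the viewpoint angle from the box yaw and position using the standard KITTI convention `alpha = rotation_y - arctan2(x, z)`, where `(x, z)` is the object center in camera coordinates:

```python
import math

# Sketch: derive the viewpoint (observation) angle alpha from a
# LiDAR-fitted 3D box, using the KITTI convention
#   alpha = rotation_y - arctan2(x, z)
# with (x, z) the object's center in camera coordinates.
def alpha_from_3d(rotation_y, x, z):
    a = rotation_y - math.atan2(x, z)
    # Wrap the result back into [-pi, pi].
    while a > math.pi:
        a -= 2.0 * math.pi
    while a < -math.pi:
        a += 2.0 * math.pi
    return a

# For a car nearly straight ahead (x close to 0), alpha stays close to rotation_y.
print(alpha_from_3d(-1.59, -0.65, 46.70))
```

The 3D size (height, width, length) can be read off the same fitted box, which covers the third labeling option above.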
CODEDISON commented 5 years ago

Thanks for the reply! But now I'm struggling to label the viewpoint angle and 3D size. Do you have any advice on labeling these annotations?