open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

Selecting label files for Waymo dataset #403

Open deeptibhegde opened 3 years ago

deeptibhegde commented 3 years ago

Hello, after converting the Waymo data to KITTI format, there are image* and label* folders. When using this format to train a network configured for KITTI, does the image_2 folder correspond only to the label_2 folder? Should the label_all folder not be used in this case?

I am trying to train PointRCNN (https://github.com/sshaoshuai/PointRCNN) on the Waymo data converted to KITTI format, but I think there is a label mapping error, since the losses are very noisy and not decreasing at all.

Tai-Wang commented 3 years ago

image_x should correspond only to the label_x folder, while label_all contains all the surrounding boxes (labels from all around the ego vehicle, not just one camera).

The main difference between KITTI and Waymo is that on KITTI we only make predictions in the frontal view. If you want good performance on Waymo, you need to change the detection range in the corresponding model config.
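For illustration, a rough sketch of the kind of change meant here, using values similar to those in mmdet3d's own configs (the exact numbers depend on the model and are not a recommendation):

```python
# Illustrative only: the actual values live in the model/dataset config you train with.
# A KITTI-style range keeps only the frontal view (x >= 0), whereas a Waymo-style
# range covers the full 360° around the ego vehicle.

# Frontal-view range similar to mmdet3d's KITTI PointPillars configs:
point_cloud_range_kitti = [0, -39.68, -3, 69.12, 39.68, 1]

# Full-surround range similar to mmdet3d's Waymo PointPillars configs:
point_cloud_range_waymo = [-74.88, -74.88, -2, 74.88, 74.88, 4]

# Anchor ranges, voxel-grid sizes, and any range-based box filtering in the
# config must be updated consistently with the new point_cloud_range.
```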

For your case, I guess there may be some inconsistency between the infos generated by our codebase and those expected by the PointRCNN implementation. Have you tried using the infos generated by mmdet3d to train PointRCNN on KITTI?

deeptibhegde commented 3 years ago

The implementation of PointRCNN I am using does not use info files and instead loads the data files directly. I have changed the point cloud ranges accordingly, but am still running into trouble. The OpenPCDet implementation seems to generate infos differently, since attempting to use those generated infos in place of the KITTI infos results in key errors.
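For reference, a minimal sketch (the file paths below are hypothetical) for comparing the keys of the two kinds of generated info files, which usually shows where such key errors come from:

```python
import pickle

# Hypothetical paths; point these at wherever each codebase wrote its info files.
MMDET3D_INFO = "data/waymo/kitti_format/waymo_infos_train.pkl"
OPENPCDET_INFO = "data/kitti/kitti_infos_train.pkl"

def top_level_keys(path):
    """Load an info pickle and return the sorted keys of its first entry."""
    with open(path, "rb") as f:
        infos = pickle.load(f)
    # Both codebases store a list of per-frame dicts, but the exact structure differs.
    first = infos[0] if isinstance(infos, list) else infos
    return sorted(first.keys())

print("mmdet3d keys:  ", top_level_keys(MMDET3D_INFO))
print("OpenPCDet keys:", top_level_keys(OPENPCDET_INFO))
```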

Alternatively, generating the infos from the Waymo-to-KITTI converted files with the OpenPCDet create_data function results in similarly noisy loss curves that do not converge.

Any help would be appreciated!!

Tai-Wang commented 3 years ago

Actually, there is no guarantee that any model can be smoothly transferred to new settings and datasets, especially for LiDAR-based 3D detection. Both PointPillars and SECOND as provided by our codebase also needed a lot of fine-tuning. I think you may need to look for experience reports on tuning PointRCNN on Waymo, or at least on new datasets, e.g., how to set the learning rate, batch size, anchors, etc.
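As a concrete (hypothetical) illustration of those knobs in the Python-dict config style used by mmdet3d, with placeholder values rather than recommendations:

```python
# Placeholder values only; the right settings depend on the model,
# the detection range, and the GPU budget.
optimizer = dict(type="AdamW", lr=1e-3, weight_decay=0.01)  # scale lr with the effective batch size
data = dict(samples_per_gpu=2, workers_per_gpu=4)           # effective batch size = num_gpus * samples_per_gpu

# Anchors have to match the classes and the enlarged Waymo range, e.g. a single
# vehicle anchor roughly sized for Waymo cars:
anchor_generator = dict(
    type="AlignedAnchor3DRangeGenerator",
    ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345]],
    sizes=[[4.73, 2.08, 1.77]],  # (length, width, height)
    rotations=[0, 1.57],
)
```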

deeptibhegde commented 3 years ago

Yes, I have been experimenting with those parameters. I will update here if I have any luck.

deeptibhegde commented 3 years ago

Update: https://github.com/cxy1997/3D_adapt_auto_driving provides a useful format conversion method.