Owen-Liuyuxuan / visualDet3D

Official Repo for Ground-aware Monocular 3D Object Detection for Autonomous Driving / YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection
https://owen-liuyuxuan.github.io/papers_reading_sharing.github.io/3dDetection/GroundAwareConvultion/
Apache License 2.0

Model trained on KITTI train only + multi-class GAC #10

Closed vobecant closed 3 years ago

vobecant commented 3 years ago

Dear authors,

thank you very much for your repository. Would it be possible for you to release the model from the paper "Ground-aware Monocular 3D Object Detection for Autonomous Driving" trained on the training split only, so that it can be evaluated on the validation split? Would it also be possible to release the code needed to reproduce the results of this model?

Also, would it work to train the GAC model for multi-class detection? Have you tried it?

Thank you very much in advance.

Owen-Liuyuxuan commented 3 years ago

I am not sure I fully understand your question, but I will try to give some tentative answers.

  1. You can change the split file in the config file (like here) to train on whichever split of the KITTI dataset you want: the Chen split, the entire training set, or a split file of your own (a minimal example of building a custom split file is sketched after this list). Due to some changes in the implementation details (both the code and the paper changed between the second review stage and publication), it is now more difficult to fully reproduce the validation-set results of the paper, but the results are still at SOTA level.
  2. Multi-class detection does work: the result on pedestrians is rather good (we tested it and it is almost at SOTA level), but there is a performance drop on cars. A sketch of the corresponding config change follows the split-file example below.
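
For reference, a minimal sketch of building a custom split file (not code from this repo). KITTI split files are plain-text lists of zero-padded 6-digit frame indices, one per line; the output path and helper name below are illustrative, and the config's training-split field should then be pointed at whichever file you generate.

```python
from pathlib import Path


def write_split_file(indices, output_path):
    """Write frame indices (ints) as zero-padded 6-digit IDs, one per line."""
    output_path = Path(output_path)
    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text("\n".join(f"{i:06d}" for i in sorted(indices)) + "\n")


# Example: a split covering the full KITTI training set (7481 frames).
write_split_file(range(7481), "visualDet3D/data/kitti/full_train.txt")
```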
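And a hedged sketch of the multi-class tweak, assuming the repo's Python configs expose the detected classes through an `obj_types`-style field (the field name is an assumption, check your copy of the config files under config/):

```python
from easydict import EasyDict as edict

cfg = edict()
# Assumed field name: add Pedestrian alongside Car for multi-class detection.
cfg.obj_types = ['Car', 'Pedestrian']
# After changing the class list, the anchor/prior statistics typically need to be
# recomputed by re-running the data preprocessing step so they cover the new class.
```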