junjie18 / CMT

[ICCV 2023] Cross Modal Transformer: Towards Fast and Robust 3D Object Detection

Result without lidar #93

Open BeMuCa opened 11 months ago

BeMuCa commented 11 months ago

Hey @junjie18, how did you do inference without the LiDAR data? Can you give me hints on how to reproduce it? I tried with empty lidar files, but I get an error: RuntimeError: CUDA error: invalid configuration argument

Thanks!

junjie18 commented 10 months ago

@BeMuCa Add dict(type='ModalMask3D', mode='test', mask_modal='points') to the validation pipeline. See https://github.com/junjie18/CMT/issues/10
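
In case it helps, here is a minimal sketch of where that entry sits in a test pipeline. The surrounding transforms are only illustrative placeholders, not the exact CMT config; keep whatever your config already defines and only add the ModalMask3D dict:

```python
# Sketch of a test/validation pipeline with camera-only evaluation.
# Only the ModalMask3D entry is the relevant addition; the other
# transforms stand in for whatever your existing config uses.
test_pipeline = [
    dict(type='LoadMultiViewImageFromFiles', to_float32=True),  # existing camera loading
    dict(type='LoadPointsFromFile', coord_type='LIDAR',
         load_dim=5, use_dim=5),                                # existing LiDAR loading
    # Mask out the point cloud at test time so the model runs camera-only,
    # instead of feeding empty LiDAR files (which triggers the CUDA error above).
    dict(type='ModalMask3D', mode='test', mask_modal='points'),
    # ... remaining transforms (normalization, formatting, etc.) unchanged ...
]
```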

dingmiaomiao commented 8 months ago

Thanks for your work! According to the paper, BEVFusion can reach 0.40 with the masked-modal training strategy when the LiDAR sensor is missing. I added "ModalMask3D" to BEVFusion and trained with the masked-modal strategy, but the result is only 0.25. Could you provide more technical details or point out what I might be doing wrong?

curiosity654 commented 1 month ago

> Thanks for your work! According to the paper, BEVFusion can reach 0.40 with the masked-modal training strategy when the LiDAR sensor is missing. I added "ModalMask3D" to BEVFusion and trained with the masked-modal strategy, but the result is only 0.25. Could you provide more technical details or point out what I might be doing wrong?

Hello, I'm also interested in this problem. Could you please provide more details about your experiments? E.g., which codebase did you use (mmdet3d or MIT), and what was your training pipeline (training from scratch for 20 epochs, or initializing from a pretrained model for 6 epochs)? Thank you in advance.