Hi, thank you for sharing such great work.
Currently, I understand that BEVFusion is trained in a two-stage process.
In the first stage, following TransFusion-L, the LiDAR branch is trained with CBGS (class-balanced resampling of the dataset).
In the second stage, CBGS is not used, and the LiDAR branch is trained jointly with the camera branch.
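For reference, here is a minimal sketch of what I mean by toggling CBGS between the two stages, written in an MMDetection3D-style config (this is only an illustration, not your exact configs; the paths and file names are placeholders):

```python
# Rough illustration of the two training setups (placeholder paths, not the official configs).

# Stage 1 -- LiDAR-only branch: the nuScenes training set is wrapped with the
# CBGSDataset wrapper so that rare classes are resampled (class-balanced grouping and sampling).
stage1_train_dataset = dict(
    type='CBGSDataset',
    dataset=dict(
        type='NuScenesDataset',
        data_root='data/nuscenes/',
        ann_file='nuscenes_infos_train.pkl',
    ),
)

# Stage 2 -- joint camera+LiDAR training: the CBGS wrapper is dropped and the model
# trains on the plain dataset, with the LiDAR branch initialized from the stage-1 checkpoint.
stage2_train_dataset = dict(
    type='NuScenesDataset',
    data_root='data/nuscenes/',
    ann_file='nuscenes_infos_train.pkl',
)
```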
Have you experimented with running this training as a single stage with CBGS? If so, could you please share the training details and how the results turned out?
Thank you for your interest in our project. This repository is no longer actively maintained, so we will be closing this issue. Please refer to the excellent implementation in MMDetection3D. Thank you again!