zheng0819 closed this issue 6 months ago
Thanks for your interest. In the SparseFusion config file, we freeze the LiDAR branch by setting `freeze_lidar_components=True` and `freeze_lidar_detector=True`. For training the LiDAR branch itself, you can refer to the config file of TransFusion-L (https://github.com/yichen928/SparseFusion/blob/main/configs/transfusion_nusc_voxel_L.py).
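For reference, a minimal sketch of where these flags sit in an mmdetection3d-style config, assuming the model dict accepts them directly (the detector type name and field placement below are assumptions; check the actual SparseFusion config for the exact structure):

```python
# Sketch of the relevant part of a SparseFusion config (not the full file).
model = dict(
    type='SparseFusion',            # assumed detector name; use the one in the repo's config
    freeze_lidar_components=True,   # freeze the pretrained LiDAR backbone/neck during fusion training
    freeze_lidar_detector=True,     # also freeze the LiDAR detection head
    # ... remaining model, data, and schedule settings as in the released config
)
```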
Thank you. I have another question: I observed that TransFusion-L requires 8 GPUs to train. Will the accuracy go down if I train with only four GPUs?
I am not sure; the accuracy might go down slightly. If you have to reduce the batch size, you should scale the learning rate proportionally to the batch size. In our experiment, we used 4 GPUs with larger memory to keep the same total batch size (samples_per_gpu x num_gpus).
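As a concrete illustration of the linear scaling rule (the base learning rate and samples_per_gpu below are placeholders; take the real values from the TransFusion-L config):

```python
# Hypothetical numbers -- replace with the values from the actual config.
base_lr = 1e-4            # learning rate tuned for the reference setup
ref_batch = 8 * 4         # reference run: 8 GPUs x samples_per_gpu = 4
my_batch = 4 * 4          # your run: 4 GPUs x samples_per_gpu = 4

# Scale the learning rate in proportion to the total batch size.
lr = base_lr * my_batch / ref_batch
print(lr)                 # 5e-05 with the placeholder numbers above
```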
Okay, I got it. Thank you very much!
One more detail: when training the LiDAR branch, in addition to using the fade strategy, is it necessary to use the nuScenes training set and validation set together as training data? Thanks.
If you want to submit the results to the leaderboard, you need to combine the training and validation sets. Otherwise, you may just use the training set.
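For a leaderboard run, one way to combine the two splits in an mmdetection3d-style config is to concatenate them in the train dataset; this is only a hedged sketch (the info file paths are assumptions, and the pipeline/modality fields of each sub-dataset are omitted for brevity -- mirror the dataset section of the repo's config):

```python
# Sketch: train on train + val by concatenating the two splits.
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=4,
    train=dict(
        type='ConcatDataset',   # build one training set from both splits
        datasets=[
            dict(type='NuScenesDataset',
                 data_root='data/nuscenes/',
                 ann_file='data/nuscenes/nuscenes_infos_train.pkl'),  # assumed path
            dict(type='NuScenesDataset',
                 data_root='data/nuscenes/',
                 ann_file='data/nuscenes/nuscenes_infos_val.pkl'),    # assumed path
        ],
    ),
)
```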
Hi, thank you for your excellent work. I have a question about the LiDAR branch: I would like to improve it, so I'm thinking of retraining it separately and then freezing it. How can I train the LiDAR branch on its own?