XuyangBai / TransFusion

[PyTorch] Official implementation of CVPR2022 paper "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers". https://arxiv.org/abs/2203.11496
Apache License 2.0

A bug in config file #6

Closed SxJyJay closed 2 years ago

SxJyJay commented 2 years ago

In https://github.com/XuyangBai/TransFusion/blob/ebd8ba4716bfbf324efc5164f53f9e9b8778556c/configs/transfusion_nusc_voxel_L.py#L33, the data root is set to `None`, while `nuscenes_dbinfos_train.pkl` actually lives under the data root `data/nuscenes/`. Besides, I want to know the batch size's influence on the final performance, as my GPU (RTX 2080, 11G memory) cannot accommodate 2 samples per GPU even when training the LiDAR-only TransFusion, so I have to set `samples_per_gpu` to 1.

XuyangBai commented 2 years ago

Hi, yes, you can change it to `data_root=data_root` to solve this problem.
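For reference, the fix looks roughly like this in the config (a sketch following the mmdetection3d config convention; the surrounding sampler fields are abbreviated and may differ from the released file):

```python
# Sketch of the relevant part of configs/transfusion_nusc_voxel_L.py
data_root = 'data/nuscenes/'

db_sampler = dict(
    data_root=data_root,  # was None in the released config; the dbinfos pkl lives under data_root
    info_path=data_root + 'nuscenes_dbinfos_train.pkl',
    # other sampler fields unchanged
)
```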

You can use `samples_per_gpu=1` and reduce the learning rate by half accordingly; it will not have a large effect on the final performance.
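Halving the learning rate along with the per-GPU batch size follows the usual linear scaling rule between effective batch size and learning rate. A minimal sketch, where the base values are illustrative and not the repo's defaults:

```python
def scaled_lr(base_lr, base_samples_per_gpu, new_samples_per_gpu):
    """Scale the learning rate linearly with per-GPU batch size (GPU count fixed)."""
    return base_lr * new_samples_per_gpu / base_samples_per_gpu

# Halving samples_per_gpu from 2 to 1 halves the learning rate.
print(scaled_lr(1e-4, 2, 1))  # -> 5e-05
```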

SxJyJay commented 2 years ago

> Hi, yes, you can change it to `data_root=data_root` to solve this problem.
>
> You can use `samples_per_gpu=1` and reduce the learning rate by half accordingly; it will not have a large effect on the final performance.

Thanks for your reply. Could you tell me the GPU memory required to train the LiDAR+Camera version of TransFusion with 2 samples per GPU? This figure would be an important reference for me when looking for other types of GPUs.

XuyangBai commented 2 years ago

Hi, I do not remember the exact number, but 11G is definitely not enough for `samples_per_gpu=2`. I have trained TransFusion with `samples_per_gpu=1` on a TITAN Xp with 12G memory, so you can first try that configuration on your 2080. If it still does not fit into your GPU, my suggestion is to use a newer version of spconv: I remember trying spconv 1.2.1 and finding that it led to a remarkable memory reduction.

SxJyJay commented 2 years ago

Thanks a lot!