Hi, yes, you can change it to data_root=data_root to solve this problem. You can also use samples_per_gpu=1 and reduce the lr by half accordingly; it will not have a large effect on the final performance.
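For reference, a minimal sketch of what these changes might look like in an mmdetection3d-style config file. The surrounding fields and the original lr value are assumptions for illustration, not values copied from the repo:

```python
# Sketch of the relevant parts of a TransFusion-style config.
# Only data_root, samples_per_gpu, and lr are the changes discussed above;
# everything else is an illustrative placeholder.
data_root = 'data/nuscenes/'

db_sampler = dict(
    data_root=data_root,  # was None; point it at the nuScenes data root
    info_path=data_root + 'nuscenes_dbinfos_train.pkl',
)

data = dict(
    samples_per_gpu=1,    # reduced from 2 to fit into a smaller GPU
    workers_per_gpu=4,
)

# Halve the learning rate to match the halved per-GPU batch size.
# 0.0001 is an assumed original value; use half of whatever your config has.
optimizer = dict(type='AdamW', lr=0.00005, weight_decay=0.01)
```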
Thanks for your reply. Can you tell me how much GPU memory is required when training the TransFusion LiDAR+Camera version with 2 samples per GPU? This number would be an important reference for me when looking for other types of GPUs.
Hi, I do not remember the exact number, but 11 GB is definitely not enough for samples_per_gpu=2. I have trained TransFusion with samples_per_gpu=1 on a TITAN Xp with 12 GB of memory, so you can first try that configuration on your 2080. If it does not fit into your GPU, my suggestion is to use a newer version of spconv; I remember trying spconv 1.2.1 and finding that it leads to a remarkable memory reduction.
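If it helps, here is a quick way to check which spconv build is installed before and after upgrading. It reads the installed package metadata rather than relying on spconv exposing a __version__ attribute, and the package names listed are common ones that may not match every install:

```python
# Print the installed spconv version(s) from package metadata.
# "spconv" covers source installs of 1.x; "spconv-cu1xx" covers
# the prebuilt 2.x wheels. Names not found are silently skipped.
from importlib.metadata import version, PackageNotFoundError

for name in ("spconv", "spconv-cu111", "spconv-cu113"):
    try:
        print(name, version(name))
    except PackageNotFoundError:
        pass
```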
Thanks a lot!
In https://github.com/XuyangBai/TransFusion/blob/ebd8ba4716bfbf324efc5164f53f9e9b8778556c/configs/transfusion_nusc_voxel_L.py#L33, the data root is set to None, while nuscenes_dbinfos_train.pkl is actually under the data root data/nuscenes/. Besides, I want to know the batch size's influence on the final performance, as my GPU memory (RTX 2080, 11 GB) cannot accommodate 2 samples even when training the LiDAR-only TransFusion, so I have to set samples_per_gpu to 1.