First of all, thanks for releasing this excellent work.
In general, GPU memory usage during the finetuning stage should be noticeably lower than during the pre-training stage.
However, when I try to run finetuning from the pretrained model (trained with mem_efficient_vidar_1_8_nusc_3future_r50.py), GPU memory usage exceeds 40 GB.
Could you share the expected GPU memory usage during finetuning?
Thanks.