I'm training faster_rcnn_orpn_r50_fpn with samples_per_gpu=1. It takes up to 7.5 GB of CUDA memory, while the inference phase only takes about 2 GB. Is this behaviour normal? I've set gpu_assign_thr to 1, but the memory usage doesn't decrease.
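For reference, here is a sketch of where gpu_assign_thr is typically set in an mmdetection-style config (the assigner values below are illustrative defaults, not taken from your actual config). Note that this flag only offloads the box-assignment step to CPU when the number of ground-truth boxes exceeds the threshold, so the bulk of training-time memory (activations saved for backprop, gradients, optimizer state) is unaffected:

```python
# Sketch of an mmdet/mmrotate-style train_cfg with gpu_assign_thr set.
# Assumes the standard MaxIoUAssigner; the IoU thresholds are illustrative.
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            # Perform assignment on CPU when the image has more than 1 gt box;
            # this trades speed for a small amount of GPU memory in the assigner.
            gpu_assign_thr=1,
            ignore_iof_thr=-1)))
```

This is why lowering gpu_assign_thr barely changes the peak: the 7.5 GB vs 2 GB gap mostly comes from training needing to keep intermediate activations and gradients that inference discards.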
I haven't paid much attention to memory occupation; 7.5 GB of CUDA memory is tolerable for me. Also note that the CUDA memory PyTorch reserves is actually higher than what it really uses.