kazuto1011 / deeplab-pytorch

PyTorch re-implementation of DeepLab v2 on COCO-Stuff / PASCAL VOC datasets
MIT License

GPU memory usage is very high #74

Closed Kenneth-X closed 5 years ago

Kenneth-X commented 5 years ago

my GPU: Tesla P100-PCIE, 16276MiB memory

my batch size = 2 and GPU ids = 0,1 (1 image per GPU), input size = (513, 513). When training starts, memory usage is very high (12756MiB / 16276MiB) and drops to normal after a while (4689MiB / 16276MiB).

This forces me to use a very small input size, as I cannot use a bigger size (1000, 1000) because memory usage blows up at the start.

What causes this? Can it be optimised? How can I solve it?

kazuto1011 commented 5 years ago

It seems that cuDNN searches for the optimal convolution algorithm at the very first iteration (whenever the input size changes, to be precise), which causes the transient memory spike. Please try disabling the benchmark mode. https://github.com/kazuto1011/deeplab-pytorch/blob/79cb3900bea41aa45cd9848bcd6b9e7e8177e6d0/main.py#L115
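A minimal sketch of the suggested change (the exact flag name is standard PyTorch; where you set it in your own training script is up to you):

```python
import torch

# With benchmark mode on, cuDNN tries several convolution algorithms
# at the first iteration (and again whenever the input size changes),
# which can transiently allocate far more GPU memory than steady-state
# training needs. Turning it off makes cuDNN pick an algorithm
# heuristically instead, trading some potential speed for a stable
# memory footprint.
torch.backends.cudnn.benchmark = False
```

Note that with a fixed input size such as (513, 513), benchmark mode usually pays off in speed after the first iteration; disabling it mainly helps when the initial spike itself exceeds your GPU memory, as with the (1000, 1000) inputs here.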

Kenneth-X commented 5 years ago

Yeah, it worked! That really helped me a lot, thank you.