buaaMars opened this issue 5 years ago
@fmassa I met the same problem with RetinaNet, but FCOS works fine. @buaaMars I tried a Cython implementation to work around it, but it was too slow. It seems that the GPU memory gets locked? When the OOM happens, gpu0 or gpu1 is still in use, and I have to kill the process.
I solved it by adding torch.cuda.empty_cache() after
https://github.com/facebookresearch/maskrcnn-benchmark/blob/55796a04ea770029a80cf5933cc5c3f3f6fa59cf/maskrcnn_benchmark/engine/trainer.py#L77-L85
When an image has >= 500 gt boxes it causes OOM (computing iou_rotate needs about 4.7 GB of memory), so I also set the maximum number of gt boxes to 300.
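For reference, this is roughly what the two changes look like in one place. The loop body is only a sketch (the real code is in do_train() in trainer.py), and the cap of 300 as well as the assumption that targets are BoxList objects are illustrative:

```python
import torch

def train_one_iteration(model, optimizer, scheduler, images, targets, device, max_gt=300):
    """One training step with the two workarounds above (illustrative sketch)."""
    # Cap the number of gt boxes per image; slicing a BoxList keeps the boxes
    # and their extra fields (labels, etc.) consistent.
    targets = [t[:max_gt] if len(t) > max_gt else t for t in targets]

    loss_dict = model(images.to(device), [t.to(device) for t in targets])
    losses = sum(loss for loss in loss_dict.values())

    optimizer.zero_grad()
    losses.backward()
    optimizer.step()
    scheduler.step()

    # Release cached blocks so large temporaries (e.g. the rotated-IoU matrix)
    # are not held across iterations.
    torch.cuda.empty_cache()
    return losses
```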
Thanks a lot! How did you find this fix?
But after adding this, it only uses about 2-4 GB of GPU memory, whereas before it used more than 10 GB. Is that normal? By the way, I use ResNeXt-101 with FPN.
❓ Questions and Help
@fmassa @chengyangfu Hi, Thanks for reading my issue!
When I train RetinaNet, the memory consumption keeps increasing until OOM. I have read several issues related to OOM, and the causes you have suggested can be summarized as follows:
1) Different aspect ratios make PyTorch reallocate larger buffers. In this case, memory usage can grow before the end of the first epoch.
2) The number of proposals affects memory usage.
3) A large number of gt bboxes consumes a lot of memory. In this case, too, the growth happens before the end of the first epoch.
With the solutions you provided and help from the Internet, I made some "improvements" with regard to these potential causes (rough sketches of 3) and 4) are given after the list):
1) For every image, I randomly crop a patch and resize it. The patch width and height and the resizing ratio are the same for all images, so all inputs have the same width and height, except for one image in the dataset whose original size is smaller than a patch.
2) Since the RetinaNet I am running is a one-stage detector, there are no proposals at all.
3) Following a suggestion I found, I wrap the relevant computation with torch.jit.script, replacing the original implementation with a scripted version.
4) I call torch.cuda.empty_cache() after an OOM happens. Also, when an input causes OOM right after I have already run torch.cuda.empty_cache(), I skip it, since it has a large number of gt bboxes; the training step is replaced with a catch-and-skip pattern.
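For 3), this is the kind of scripted function I mean; the box IoU below is only a sketch of the technique, not the code from the repository:

```python
import torch

@torch.jit.script
def box_iou(boxes1: torch.Tensor, boxes2: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU for (N, 4) and (M, 4) boxes in (x1, y1, x2, y2) format."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])

    lt = torch.max(boxes1[:, :2].unsqueeze(1), boxes2[:, :2].unsqueeze(0))  # (N, M, 2)
    rb = torch.min(boxes1[:, 2:].unsqueeze(1), boxes2[:, 2:].unsqueeze(0))  # (N, M, 2)
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, :, 0] * wh[:, :, 1]                                       # (N, M)

    return inter / (area1.unsqueeze(1) + area2.unsqueeze(0) - inter)
```

For 4), the catch-and-skip pattern looks roughly like this (function and argument names are mine, not from the repository):

```python
import torch

def try_train_step(model, optimizer, images, targets):
    """Run one training step; on CUDA OOM, free the cache and skip the batch."""
    try:
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        optimizer.zero_grad()
        losses.backward()
        optimizer.step()
        return losses
    except RuntimeError as e:
        if "out of memory" not in str(e):
            raise
        # Drop any partially accumulated gradients, then release cached blocks.
        optimizer.zero_grad()
        torch.cuda.empty_cache()
        return None
```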
With these "improvements", memory consumption still increases during training. At first, the memory consumption reported by PyTorch is 4 or 5 GB; only inputs with more than 800 gt bboxes cause OOM, and they are skipped afterwards. Gradually, the memory usage grows, and there is always a jump of about 2.7 GB after a few hundred iterations. Then inputs with 260+ gt bboxes can cause OOM. Thousands of iterations later, the memory usage shown by nvidia-smi approaches the capacity of my GPU, which is 12 GB, and the smallest number of gt bboxes among the images causing OOM drops to around 100. Eventually OOM happens at every iteration, even for images with only 1 gt bbox, and training cannot continue. I have to kill the program and restart it from the last checkpoint. In the first iterations after the restart, memory consumption is again 4 or 5 GB, as low as at the start of training. Instead of jumping back to the high usage I saw when I killed the program, it grows gradually, just as if I had started training from iteration 0. This makes me feel that training does not actually need that much memory.

Given this behavior, I believe the growing memory usage cannot be explained simply by attributes of certain inputs, because the set of "problem images" keeps growing as the epochs go by: an input that is fine in earlier iterations causes OOM later. It looks more like a memory leak. The "improvements" I made let me run the program with larger inputs for a longer time before it grinds to a halt, but they do not solve the fundamental problem of ever-increasing memory usage. I still have to restart the program every 10k iterations.
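One way to narrow this down is to log the allocator statistics every few hundred iterations: if torch.cuda.memory_allocated() keeps growing, live tensors are being retained somewhere (a leak); if only the reserved/cached number grows, it is the caching allocator holding on to blocks. A minimal helper could look like this (memory_reserved() is called memory_cached() on older PyTorch versions):

```python
import torch

def log_gpu_memory(iteration):
    # All values in MB; memory_reserved() is memory_cached() on older PyTorch.
    allocated = torch.cuda.memory_allocated() / 1024 ** 2
    reserved = torch.cuda.memory_reserved() / 1024 ** 2
    peak = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"iter {iteration}: allocated {allocated:.0f} MB, "
          f"reserved {reserved:.0f} MB, peak allocated {peak:.0f} MB")
```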
In writing this issue, I respectfully ask that you: 1) check my "improvements" and help me solve the problem; 2) write a document that helps people use memory efficiently; 3) pay attention to the increasing memory usage problem and try to fix it, since this is not the first time it has been reported.
Thank you very much!