Scalsol / mega.pytorch

Memory Enhanced Global-Local Aggregation for Video Object Detection, CVPR2020

RuntimeError: CUDA out of memory #32

Closed wasif1508 closed 4 years ago

wasif1508 commented 4 years ago

Hi, I have an NVIDIA 1050 Ti (4 GB) GPU. I was running the demo script. It worked perfectly fine for the single-frame baseline; however, for the "mega" method I get a CUDA out-of-memory error:

RuntimeError: CUDA out of memory. Tried to allocate 396.00 MiB (GPU 0; 3.95 GiB total capacity; 1.89 GiB already allocated; 384.75 MiB free; 650.19 MiB cached)

Are there any parameters in the configuration files that could be modified to make it work? Any help would be appreciated. Thanks in advance.

Scalsol commented 4 years ago

All configs related to "mega" are defined here under the prefix _C.MODEL.VID.MEGA. For your case, you may need to reduce the size of the memory by setting _C.MODEL.VID.MEGA.MEMORY.SIZE to a smaller number, or disable the memory entirely by setting _C.MODEL.VID.MEGA.MEMORY.ENABLE to False. The memory and the relation calculation consume a lot of GPU memory, so with these modifications the issue may be resolved, but the detection performance will also degrade.

If that is still not enough, you could try modifying other configs such as _C.MODEL.VID.MEGA.MIN_OFFSET and _C.MODEL.VID.MEGA.MAX_OFFSET. But make sure that _C.MODEL.VID.MEGA.ALL_FRAME_INTERVAL equals _C.MODEL.VID.MEGA.MAX_OFFSET - _C.MODEL.VID.MEGA.MIN_OFFSET + 1, and that _C.MODEL.VID.MEGA.KEY_FRAME_LOCATION = -_C.MODEL.VID.MEGA.MIN_OFFSET. Or just use a smaller model.
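As a rough illustration only (not verbatim from this repo), here is a minimal sketch of overriding these options through the yacs-style config object before building the model. The import path, config file name, and the concrete values are assumptions; the constraints on ALL_FRAME_INTERVAL and KEY_FRAME_LOCATION follow the rule above.

```python
# Sketch: shrink MEGA's memory/aggregation window to fit a small GPU.
# Import path and config file are assumptions, adjust to your checkout.
from mega_core.config import cfg  # assumed package name

cfg.merge_from_file("configs/MEGA/vid_R_101_C4_MEGA_1x.yaml")  # assumed config file
cfg.merge_from_list([
    "MODEL.VID.MEGA.MEMORY.ENABLE", False,    # disable the long-range memory entirely, or
    # "MODEL.VID.MEGA.MEMORY.SIZE", 10,       # keep it but make it smaller
    "MODEL.VID.MEGA.MIN_OFFSET", -6,          # narrower local aggregation window (example values)
    "MODEL.VID.MEGA.MAX_OFFSET", 6,
    "MODEL.VID.MEGA.ALL_FRAME_INTERVAL", 13,  # must equal MAX_OFFSET - MIN_OFFSET + 1
    "MODEL.VID.MEGA.KEY_FRAME_LOCATION", 6,   # must equal -MIN_OFFSET
])
cfg.freeze()
```

With the memory disabled and a narrower window like this, the footprint should drop noticeably, at the cost of some accuracy.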

Hope this helps.

wasif1508 commented 4 years ago

@Scalsol Thanks for the help, it works.