cuiziteng / Illumination-Adaptive-Transformer

🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement; runs in 0.004 seconds, so try it for pre-processing.
Apache License 2.0

CUDA out of memory #57

Closed: Ahmad-Hammoudeh closed this issue 1 year ago

Ahmad-Hammoudeh commented 1 year ago

Thanks for your awesome work; it gives really nice results!!

I faced a problem when running the demo on an image with resolution 1920x1080, or when passing several images (about 5) of size (256, 128) to the model:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 508.00 MiB (GPU 0; 6.00 GiB total capacity; 4.11 GiB already allocated; 383.94 MiB free; 4.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

and this is the traceback:

Traceback (most recent call last):
  File "path-to\img_demo.py", line 61, in <module>
    _, _ ,enhanced_img = model(input)
                         ^^^^^^^^^^^^
  File "path-to\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "path-to\IAT_enhance\model\IAT_main.py", line 124, in forward
    mul, add = self.local_net(img_low)
               ^^^^^^^^^^^^^^^^^^^^^^^
  File "path-to\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "path-to\IAT_enhance\model\IAT_main.py", line 85, in forward
    mul = self.mul_blocks(img1) + img1
          ^^^^^^^^^^^^^^^^^^^^^
  File "path-to\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "path-to\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "path-to\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "path-to\site-packages\torch\nn\modules\activation.py", line 685, in forward
    return F.gelu(input, approximate=self.approximate)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I'm using a GTX 1060. I tried several values of max_split_size_mb with no luck. My question is: is it normal to run out of GPU memory when running the code?
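(For context, the allocator option mentioned in the error is normally passed through the PYTORCH_CUDA_ALLOC_CONF environment variable and has to be set before CUDA is first initialised; a minimal sketch of how it is applied, with 128 chosen purely as an example value, is:

import os

# Must be in place before the first CUDA allocation, so set it before
# importing torch (or at least before any tensor is moved to the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

Note that this only reduces fragmentation of already-reserved memory; it cannot create memory the model does not have.)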

Thanks in advance

cuiziteng commented 1 year ago

Maybe you could down-scale the image, or use torch.utils.checkpoint to reduce the memory cost.
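A minimal sketch of the down-scaling route, assuming model and input are prepared the same way as in img_demo.py (the 0.5 scale factor is just an example):

import torch.nn.functional as F

h, w = input.shape[-2:]                                # original resolution
# run the enhancement on a smaller copy of the image
small = F.interpolate(input, scale_factor=0.5, mode='bilinear', align_corners=False)
_, _, enhanced_small = model(small)                    # IAT forward as in img_demo.py
# bring the enhanced result back to the original resolution
enhanced_img = F.interpolate(enhanced_small, size=(h, w), mode='bilinear', align_corners=False)

torch.utils.checkpoint, by contrast, is mainly a training-time tool: it trades recomputation in the backward pass for a smaller activation footprint, so it would require wrapping parts of the model's forward rather than changing the demo script.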

Ahmad-Hammoudeh commented 1 year ago

Thanks for your response. I managed to solve the problem by wrapping the inference in with torch.no_grad():, which reduced the memory consumption a lot.
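For reference, a minimal sketch of that fix in img_demo.py (the surrounding model-loading code is assumed unchanged):

import torch

model.eval()
# no_grad() stops autograd from keeping every intermediate activation alive,
# which is what was filling the 6 GiB card during plain inference.
with torch.no_grad():
    _, _, enhanced_img = model(input)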