kanezaki / pytorch-unsupervised-segmentation-tip

MIT License

RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch) #8

Open OmarHedeya95 opened 3 years ago

OmarHedeya95 commented 3 years ago

Hello, I always get the following error when running the demo code multiple times, even if I am using a very tiny image. I think maybe the cache is not being emptied, or something similar? I am not really sure and would appreciate your help. Thank you.

Error Message:

```
Traceback (most recent call last):
  File "demo.py", line 127, in <module>
    lhpy = loss_hpy(HPy, HPy_target)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 88, in forward
    return F.l1_loss(input, target, reduction=self.reduction)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/functional.py", line 2191, in l1_loss
    ret = torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch)
```
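If the error only shows up after launching the demo repeatedly in the same Python session (for example in a notebook), one thing that sometimes helps is collecting garbage and clearing PyTorch's caching allocator between runs. This is a minimal sketch, not part of demo.py, and it only frees memory whose tensors are no longer referenced anywhere:

```python
import gc
import torch

def free_gpu_memory():
    """Release cached GPU memory between repeated runs in the same process."""
    gc.collect()               # collect unreachable Python objects that still hold tensors
    torch.cuda.empty_cache()   # return cached allocator blocks to the CUDA driver

    # Report what PyTorch still holds (values in MiB).
    print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.1f} MiB")

free_gpu_memory()
```

If each run is a fresh `python demo.py` process, all GPU memory is released automatically when the process exits, so a persistent OOM on a ~4 GiB card usually means the input image (and hence the intermediate feature maps) is simply too large rather than that memory is leaking.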

DmitrySavchuk commented 3 years ago

Same issue

hansenmaster commented 2 years ago

I had the same problem and resolved it by downsizing the resolution of the target image. The example images are only about 500x500 pixels.
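A minimal sketch of that workaround, assuming the demo reads the image from a file path passed on the command line; the file names and the 500 px cap below are just illustrative:

```python
import cv2

# Hypothetical preprocessing step: cap the longer image side at ~500 px before
# running demo.py, so the network's feature maps fit into ~4 GiB of GPU memory.
max_side = 500
im = cv2.imread("input.jpg")
h, w = im.shape[:2]
scale = max_side / max(h, w)
if scale < 1.0:
    im = cv2.resize(im, (int(w * scale), int(h * scale)),
                    interpolation=cv2.INTER_AREA)  # INTER_AREA suits downscaling
cv2.imwrite("input_small.jpg", im)  # then point the demo at this smaller file
```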