yzxing87 / pytorch-deep-video-prior

[NeurIPS 2020] Blind Video Temporal Consistency via Deep Video Prior
118 stars 22 forks

CUDA out of memory #4

Closed myname1111 closed 3 years ago

myname1111 commented 3 years ago

Attempting to run main_IRT.py with 6000 frames causes RuntimeError: CUDA out of memory. This could be fixed by reducing the batch size or by allocating memory only when it is needed. I'm using Google Colab, and the RAM it offers was never fully used; it stopped at about 2/3 of capacity and then raised this error:

Traceback (most recent call last):
  File "main_IRT.py", line 139, in
    net_in = torch.from_numpy(net_in).permute(0,3,1,2).float().to(device)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.73 GiB total capacity; 12.13 GiB already allocated; 5.88 MiB free; 13.79 GiB reserved in total by PyTorch)

yzxing87 commented 3 years ago

Hi, the error is likely caused by loading all 6000 frames into memory at once. We did this previously to accelerate training. You can instead load each frame just before the iteration that uses it; torch.utils.data.DataLoader would be a better fit for this. We will update the code to use a PyTorch DataLoader. Thanks.
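
A minimal sketch of the lazy-loading approach described above, assuming the input and processed frames are stored as numbered PNG files in two directories. The directory names, normalization, and batch size here are illustrative assumptions, not the repository's actual interface:

```python
import glob
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class FramePairDataset(Dataset):
    """Loads one (input, processed) frame pair from disk per item,
    so only the current batch occupies CPU/GPU memory."""
    def __init__(self, input_dir, processed_dir):
        self.input_paths = sorted(glob.glob(f"{input_dir}/*.png"))
        self.processed_paths = sorted(glob.glob(f"{processed_dir}/*.png"))

    def __len__(self):
        return len(self.input_paths)

    def __getitem__(self, idx):
        # HWC uint8 -> CHW float32 in [0, 1]
        inp = np.asarray(Image.open(self.input_paths[idx]), dtype=np.float32) / 255.0
        proc = np.asarray(Image.open(self.processed_paths[idx]), dtype=np.float32) / 255.0
        return (torch.from_numpy(inp).permute(2, 0, 1),
                torch.from_numpy(proc).permute(2, 0, 1))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
loader = DataLoader(FramePairDataset("input", "processed"),
                    batch_size=1, shuffle=False, num_workers=2)

for net_in, net_gt in loader:
    # Only the current batch is moved to the GPU, instead of all 6000 frames at once.
    net_in, net_gt = net_in.to(device), net_gt.to(device)
    # ... forward pass / loss / optimizer step ...
```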

SolomGfxcolor commented 3 years ago

Can you send a Colab notebook to me, please? 🙏🏻 My email: eslam.alfnnan15@gmail.com

semel1 commented 2 years ago

"You can also load the frame just before each iteration. I think torch.utils.data.DataLoader can be a better option for this issue. We will update the code using pytorch dataloader." Any update? Could you please provide me with more detail on how to " load the frame just before each iteration"