ltkong218 / IFRNet

IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation (CVPR 2022)

Memory accumulation #30

Closed: pablodawson closed this issue 1 year ago

pablodawson commented 1 year ago

Hey, thanks for sharing your work, the model is great so far.

I'm finding that inference tends to accumulate GPU memory on each call; for example, if you run the model in a for loop, after about 10 iterations you run out of memory.

for i in range(20):
    imgt_pred = model.inference(img0_, img8_, embt)

Traceback (most recent call last):
  File "C:\Users\Pablo\IFRNet\interpolation.py", line 47, in <module>
    inter = model.inference(img0, img1, 3)
  File "C:\Users\Pablo\IFRNet\interpolation.py", line 27, in inference
    imgt_pred = self.model.inference(img0_, img1_, embt)
  File "C:\Users\Pablo\IFRNet\models\IFRNet.py", line 193, in inference
    out1 = self.decoder1(ft_1_, f0_1, f1_1, up_flow0_2, up_flow1_2)
  File "C:\Users\Pablo\IFRNet\models\IFRNet.py", line 149, in forward
    f_out = self.convblock(f_in)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 8.00 GiB total capacity; 7.19 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'll see if I can find where it's accumulating.
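
A quick way to confirm the growth (just a sketch, assuming the same model, img0_, img8_, and embt objects as in the loop above) is to log torch.cuda.memory_allocated() after every call:

import torch

# Print allocated CUDA memory after each inference call; if the number
# keeps climbing across iterations, something is retaining tensors.
for i in range(20):
    imgt_pred = model.inference(img0_, img8_, embt)
    print(f"iter {i}: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB allocated")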

pablodawson commented 1 year ago

Never mind, running it inside torch.no_grad() fixes it.
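
For anyone hitting the same issue, a minimal sketch of the fix (variable names follow the snippet above; exact preprocessing is assumed):

import torch

# Without no_grad(), each inference call builds an autograd graph and keeps
# its intermediate activations alive, so GPU memory grows every iteration.
# Disabling gradient tracking lets those buffers be freed immediately.
with torch.no_grad():
    for i in range(20):
        imgt_pred = model.inference(img0_, img8_, embt)

On recent PyTorch versions, torch.inference_mode() can be used the same way and is typically at least as fast.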