MCG-NKU / AMT

Official code for "AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation" (CVPR2023)
https://nk-cs-zzl.github.io/projects/amt/index.html

torch.cuda.OutOfMemoryError: CUDA out of memory #6

Open goometasoft opened 1 year ago

goometasoft commented 1 year ago

Thank you very much for AMT!

When I run (python demos/demo_2x.py ...), I get the error torch.cuda.OutOfMemoryError: CUDA out of memory:

Loading [images] from [['image\\panda_0.png', 'image\\panda_1.png']], the number of images = [2]
anchor_resolution 67108864
Loading [networks.AMT-G.Model] from [pretrained/amt-g.pth]...
Start frame interpolation:
Iter 1. input_frames=2 output_frames=3
Traceback (most recent call last):
  File "D:\Software\AI\AMT\2304\amt_1\demos\demo_2x.py", line 190, in <module>
    imgt_pred = model(in_0, in_1, embt, scale_factor=scale, eval=True)['imgt_pred']
  File "D:\Program\conda\envs\py39\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Software\AI\AMT\2304\amt_1\.\networks\AMT-G.py", line 86, in forward
    corr_fn = BidirCorrBlock(fmap0, fmap1, radius=self.radius, num_levels=self.corr_levels)
  File "D:\Software\AI\AMT\2304\amt_1\.\networks\blocks\raft.py", line 161, in __init__
    corr_T = F.avg_pool2d(corr_T, 2, stride=2)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.46 GiB (GPU 0; 8.00 GiB total capacity; 6.48 GiB already allocated; 0 bytes free; 6.50 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
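The traceback points at BidirCorrBlock, a RAFT-style bidirectional all-pairs correlation volume. A back-of-the-envelope estimate makes the OOM unsurprising for a 2560x1440 input: assuming features are extracted at 1/8 of the input resolution in fp32 (a common RAFT convention, not confirmed from the AMT code), the level-0 correlation volumes alone exceed the 8 GiB card.

```python
# Rough estimate of the level-0 all-pairs correlation memory used by a
# RAFT-style bidirectional correlation block (the pyramid built by
# avg_pool2d adds roughly another third on top of this).
# Assumptions: features at 1/8 input resolution, fp32 (4 bytes).

def corr_volume_gib(h, w, stride=8, bytes_per_elem=4, bidirectional=True):
    n = (h // stride) * (w // stride)   # number of feature positions
    vol = n * n * bytes_per_elem        # one (n x n) correlation volume
    if bidirectional:
        vol *= 2                        # forward + backward volumes
    return vol / 2**30

print(f"{corr_volume_gib(1440, 2560):.1f} GiB")  # → 24.7 GiB for 2560x1440
```

This is why the demo downscales large inputs before correlation and only upsamples the result at the end.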
Paper99 commented 1 year ago

Hello, what is the size of the image?

goometasoft commented 1 year ago

My image is 2560x1440. Can AMT only process up to 1280x720?

NK-CS-ZZL commented 1 year ago

You can enlarge anchor_memory to fit a GPU with low VRAM capacity.
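Independently of tuning anchor_memory, the error message itself suggests setting max_split_size_mb to reduce fragmentation. A minimal way to do that from inside the demo script (the value 128 is an example, not a recommendation from the authors) is to set the environment variable before torch initializes its CUDA allocator:

```python
import os

# Must be set before the CUDA caching allocator is initialized,
# so place it at the very top of demos/demo_2x.py, before "import torch".
# max_split_size_mb:128 is an illustrative value; tune it for your GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

The same effect can be had by exporting PYTORCH_CUDA_ALLOC_CONF in the shell before launching the script.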

Vio1etovo commented 11 months ago

You can enlarge anchor_memory to fit a GPU with low VRAM capacity.

What rule should be used to set anchor_memory?