Describe the bug
The current implementation of patching in the PieAPP metric first creates all image patches and then runs inference over them.
When the input image is very large (say 3000x2000 pixels) and the stride is small (e.g. stride=6, as in the paper), those patches consume almost all GPU memory, leaving no room for the actual model inference.
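A back-of-the-envelope estimate shows why this blows up. The sketch below assumes 64x64 patches (as in the PieAPP paper) and float32 tensors; the exact patch size used internally may differ:

```python
def patch_memory_bytes(h, w, patch=64, stride=6, channels=3, dtype_bytes=4):
    """Estimate memory needed to materialize all patches of one image at once."""
    n_h = (h - patch) // stride + 1  # patches along height
    n_w = (w - patch) // stride + 1  # patches along width
    n_patches = n_h * n_w
    return n_patches, n_patches * channels * patch * patch * dtype_bytes

n, total = patch_memory_bytes(3000, 2000)
print(n, total / 2**30)  # ~158k patches, several GiB for a single input image
```

With two inputs (`x` and `y`) the footprint roughly doubles, which matches the out-of-memory error below.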
Related to #252
To Reproduce
```python
import torch
import piq

metric = piq.PieAPP(stride=6)
x = torch.rand(1, 3, 3000, 2000).cuda()
y = torch.rand(1, 3, 3000, 2000).cuda()
result = metric(x, y)
```

```
RuntimeError: CUDA out of memory. Tried to allocate 9.71 GiB (GPU 0; 9.78 GiB total capacity;
537.71 MiB already allocated; 5.91 GiB free; 554.00 MiB reserved in total by PyTorch)
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
Solution
An easy-to-implement solution is to modify the `crop_patches` function to return an iterator/generator over patches instead of creating all of them up front.
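A minimal sketch of what a generator-based `crop_patches` could look like, assuming an input tensor of shape `(N, C, H, W)`; the actual signature in piq may differ:

```python
import torch

def crop_patches(x: torch.Tensor, size: int = 64, stride: int = 6):
    """Yield patches lazily instead of materializing them all at once."""
    _, _, h, w = x.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            yield x[:, :, i:i + size, j:j + size]
```

The metric can then pull patches (or small batches of them) on the fly, keeping peak memory bounded by the batch size rather than the total patch count.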