YahirSalahu opened 1 year ago
This error means that you don't have enough GPU memory for what you are trying to do (you have 4 GiB, of which the OS and other apps probably take some, too).
As far as I understand, this implementation needs GPU memory for three things: the model weights, the input image, and the upscaled output (which has many times more pixels than the input for 4x upscaling).
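If you want to check how much VRAM is actually free before picking a tile size, something like this should work (a minimal sketch, assuming a reasonably recent PyTorch that provides `torch.cuda.mem_get_info`):

```python
import torch

# Query free / total memory on GPU 0, in bytes (requires a CUDA-enabled PyTorch).
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free:  {free_bytes / 1024**3:.2f} GiB")
print(f"total: {total_bytes / 1024**3:.2f} GiB")
```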
You can try using tiling instead of upscaling the whole image at once (which is the default); maybe start with
> python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs --tile 64
As tiling makes the whole process slower and can create seams, you might want to start with a single image instead, though.
Details on how to use tiling: https://github.com/xinntao/Real-ESRGAN/tree/master#usage-of-python-script
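If you end up calling it from Python instead of the CLI, the tile size is just a constructor argument. Rough sketch based on the upstream repo's inference example (the Windows fork you are using may differ slightly, and the file name here is only a placeholder):

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RealESRGAN_x4plus uses the standard RRDBNet backbone.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
    model=model,
    tile=64,        # process the image in 64x64 tiles instead of all at once
    tile_pad=10,    # overlap between tiles to reduce visible seams
    pre_pad=0,
    half=True)      # fp16 roughly halves the memory footprint

img = cv2.imread('inputs/PXL_20220927_222952679.jpg', cv2.IMREAD_UNCHANGED)  # placeholder file name
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('PXL_20220927_222952679_out.png', output)
```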
I have implemented a local padding technique that super-resolves large inputs without introducing seam artifacts: https://github.com/Alhasan-Abdellatif/Real-ESRGAN-lp
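For anyone curious, the general idea (just a sketch of overlap-and-crop tiling, not the exact code from that repo) is to upscale every tile with a few extra pixels of context around it and keep only the centre of the result, so the seams fall in the discarded border. `sr_fn` below stands in for whatever per-tile super-resolution call you use:

```python
import numpy as np

def sr_with_padded_tiles(img, sr_fn, scale=4, tile=64, pad=10):
    """Upscale `img` (H, W, C) tile by tile.

    Each tile is extracted with `pad` extra pixels on every side, upscaled
    with `sr_fn`, and only the centre region is written to the output, so
    the tile borders (where artifacts appear) are discarded.
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Tile bounds plus padding, clipped to the image.
            y0, y1 = max(y - pad, 0), min(y + tile + pad, h)
            x0, x1 = max(x - pad, 0), min(x + tile + pad, w)
            sr_tile = sr_fn(img[y0:y1, x0:x1])  # shape: (scale*(y1-y0), scale*(x1-x0), C)
            # Crop away the padded border in the upscaled tile.
            cy, cx = (y - y0) * scale, (x - x0) * scale
            th, tw = (min(y + tile, h) - y) * scale, (min(x + tile, w) - x) * scale
            out[y * scale:y * scale + th, x * scale:x * scale + tw] = \
                sr_tile[cy:cy + th, cx:cx + tw]
    return out
```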
I'm trying to upscale a picture and I'm getting the message below. I have a GTX 1650 Super (4 GB VRAM) and 32 GB RAM.
```
(base) G:\Ai\Real-ESRGAN\Real-ESRGAN-Windows>python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs
Testing 0 PXL_20220927_222952679
Error CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 4.00 GiB total capacity; 2.41 GiB already allocated; 0 bytes free; 2.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "G:\Ai\Real-ESRGAN\Real-ESRGAN-Windows\inference_realesrgan.py", line 243, in <module>
    main()
  File "G:\Ai\Real-ESRGAN\Real-ESRGAN-Windows\inference_realesrgan.py", line 77, in main
    output_img = upsampler.post_process()
  File "G:\Ai\Real-ESRGAN\Real-ESRGAN-Windows\inference_realesrgan.py", line 239, in post_process
    return self.output
AttributeError: 'RealESRGANer' object has no attribute 'output'
```