Describe the bug
Caveat: this may or may not be a bug, but please read on.
I constantly get an out-of-memory error and cannot find any way to reduce the batch size. I have an RTX 4090 with 24GB of GPU memory. The task manager shows it using all of the memory, which is why I say this may not be a bug. However, I am running Ubuntu under WSL2, which could be causing it to misjudge how much RAM is available to allocate. It is hard to debug without being able to change the batch size.
Code:
from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests
import torch
device = torch.device("cuda")
print(torch.cuda.is_available()) #It is True
img = Image.open('./IMG_5195.jpeg') #this is a 2.4MB jpeg
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
model = model.to(device)
inputs = ImageLoader.load_image(img)
preds = model(inputs.to(device))
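A possible workaround until a batch/tile size is exposed: run the model on fixed-size crops of the input and stitch the upscaled results, which bounds peak GPU memory per forward pass. The `upscale_tiled` helper below is only a sketch (it is not part of super_image), and the `nn.Upsample` stand-in replaces `EdsrModel` purely so the snippet runs without the library installed; with super_image available you would pass the EDSR model instead. Note that naive tiling can leave visible seams at tile borders; overlapping tiles with blending avoids that.

```python
import torch
import torch.nn as nn

def upscale_tiled(model, x, scale=2, tile=256):
    """Run `model` over `tile`-sized crops of x (shape [1, C, H, W]) and
    stitch the outputs, bounding peak memory per forward pass.
    Hypothetical helper, not a super_image API."""
    _, c, h, w = x.shape
    out = torch.zeros(1, c, h * scale, w * scale, dtype=x.dtype, device=x.device)
    with torch.no_grad():
        for top in range(0, h, tile):
            for left in range(0, w, tile):
                crop = x[:, :, top:top + tile, left:left + tile]
                sr = model(crop)  # upscaled crop, scale× the crop size
                out[:, :,
                    top * scale:top * scale + sr.shape[2],
                    left * scale:left * scale + sr.shape[3]] = sr
    return out

# Stand-in for EdsrModel (hypothetical): bicubic upsampling, used here only
# so the sketch runs without super_image installed.
model = nn.Upsample(scale_factor=2, mode='bicubic', align_corners=False)
x = torch.rand(1, 3, 512, 768)
print(upscale_tiled(model, x, scale=2, tile=256).shape)  # torch.Size([1, 3, 1024, 1536])
```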
Screenshots
Image 5195 that I'm trying to upscale