platten opened this issue 4 years ago
I think an image bigger than 250x250 px is too large to fit in GPU memory with the EDVR model. In my case, I wrote a script that splits the image into tiles and runs EDVR on each tile. This is my script; I hope it helps: https://gist.github.com/ryul99/a192310cf2cd3ce94f83b928d44a141f
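For reference, here is a minimal sketch of the tiling idea (not the gist itself): split the LR frames into overlapping patches, run the model on each patch, and stitch the upscaled outputs back together. The function name, tile size, overlap, and the assumed 4x scale factor are illustrative, not taken from the gist.

```python
import torch

def tiled_forward(model, lr_frames, tile=128, overlap=16, scale=4):
    """Run EDVR over overlapping tiles of the LR input and stitch the SR output.

    lr_frames: (1, N, C, H, W) stack of neighbouring LR frames.
    tile/overlap are in LR pixels; scale is the assumed SR factor.
    """
    _, _, c, h, w = lr_frames.shape
    out = lr_frames.new_zeros(1, c, h * scale, w * scale)
    weight = lr_frames.new_zeros(1, 1, h * scale, w * scale)
    step = max(tile - overlap, 1)
    with torch.no_grad():
        for y in range(0, h, step):
            for x in range(0, w, step):
                # Clamp the window so it never runs off the image border.
                y0 = min(y, max(h - tile, 0))
                x0 = min(x, max(w - tile, 0))
                patch = lr_frames[:, :, :, y0:y0 + tile, x0:x0 + tile]
                sr = model(patch)  # or util.single_forward(model, patch)
                ys, xs = y0 * scale, x0 * scale
                out[:, :, ys:ys + sr.shape[-2], xs:xs + sr.shape[-1]] += sr
                weight[:, :, ys:ys + sr.shape[-2], xs:xs + sr.shape[-1]] += 1
    # Average the overlapping regions to hide seams at tile borders.
    return out / weight.clamp(min=1)
```

A 128 px LR tile should fit comfortably in 11 GB, given that failures only start above roughly 250x250 px.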
These folks used 8 Titan GPUs to train their model, so I'm not sure a single 2080 Ti can match that. As @ryul99 pointed out, you will have to use some technique to reduce the amount of data you load into GPU memory at once.
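Besides tiling, a minimal sketch of generic PyTorch memory-saving measures for inference is below; these are standard PyTorch features, not EDVR-specific, and whether fp16 preserves EDVR output quality is an assumption you would need to verify.

```python
import torch

def low_memory_forward(model, lr_frames):
    model.eval()
    with torch.no_grad():                # skip autograd buffers entirely
        with torch.cuda.amp.autocast():  # run eligible ops in fp16
            return model(lr_frames)
```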
Hello,
I have an RTX 2080Ti with 11 GB of memory, yet for some reason I keep running out of memory when trying to upscale any image larger than 250x250 px using `EDVR_util.single_forward`. The pre-trained model I am using is `EDVR_Vimeo90K_SR_L.pth`, and I am configuring the model in the following way: `EDVR_model = EDVR_arch.EDVR(128, 7, 8, 5, 40, predeblur=False, HR_in=False)`
Any ideas? Thanks!