Closed — ArfaSaif closed this issue 2 years ago
VRT does consume a lot of memory. You can reduce the channel and head counts at the cost of some performance, or refer to our improved version: https://github.com/JingyunLiang/RVRT
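To get a feel for why reducing channel and head counts helps, here is a minimal sketch using a plain `nn.MultiheadAttention` layer as a stand-in (the dims 180 and 90 are illustrative, not VRT's actual configuration): attention weights scale with the square of the embedding dimension, so halving the channel width roughly quarters the parameter count of each attention block.

```python
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

# Stand-in attention layers; VRT's own blocks differ, but the
# quadratic scaling of weights with channel width is the same.
full = nn.MultiheadAttention(embed_dim=180, num_heads=6)
slim = nn.MultiheadAttention(embed_dim=90, num_heads=3)

print(count_params(full))  # weights scale ~ embed_dim^2
print(count_params(slim))  # roughly a quarter of the above
```

Activation memory also shrinks proportionally with channel width, which is usually the bigger win during training.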
Hi, have you solved this 2x upscale issue?
I was wondering if the authors have any suggestions for finetuning the VRT model to do a 2x upscale instead of a 4x upscale. I removed some layers from the Upsample module to support 2x upscale, however the forward/backward pass is consuming too much VRAM. Which layers do you suggest to remove from the model to reduce the model complexity and also achieve good results for a 2x upscale?
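For reference, a common way to build an SR upsample head is to stack conv + `PixelShuffle(2)` stages, one per 2x factor, so a 2x head keeps a single stage where a 4x head has two. A minimal sketch (the class name `Upsample2x` and feature width 64 are illustrative, not taken from the VRT code):

```python
import torch
import torch.nn as nn

class Upsample2x(nn.Sequential):
    """Hypothetical 2x upsample head: a single conv + PixelShuffle(2)
    stage, instead of the two stages a 4x head would use."""
    def __init__(self, num_feat: int):
        super().__init__(
            # Expand channels by 4 so PixelShuffle(2) can fold them
            # into a 2x larger spatial grid with num_feat channels.
            nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1),
            nn.PixelShuffle(2),
        )

feat = torch.randn(1, 64, 64, 64)  # (N, C, H, W) feature map
out = Upsample2x(64)(feat)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```

Note that trimming the upsample head alone barely changes memory use: most of the VRAM goes to the attention/backbone activations, so the channel and head reductions suggested above matter more.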
Currently, I have tried 2x upscale training with 1 GPU, batch size = 1, low-quality frame crop size = 64x64, and high-quality frame crop size = 128x128. The maximum VRAM usage in the forward/backward pass is 23GB.
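If the architectural reductions are not enough, mixed-precision training often cuts activation memory substantially on CUDA GPUs. A minimal sketch of an AMP training step (the tiny conv model and L1 loss are stand-ins, not VRT's actual model or loss):

```python
import torch
import torch.nn.functional as F

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = torch.nn.Conv2d(3, 3, 3, 1, 1).to(device)  # stand-in for VRT
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

lq = torch.randn(1, 3, 64, 64, device=device)  # low-quality crop
gt = torch.randn(1, 3, 64, 64, device=device)  # ground-truth (stand-in)

# Forward pass runs in half precision under autocast (no-op on CPU);
# the scaler rescales the loss to avoid fp16 gradient underflow.
with torch.autocast(device_type=device, enabled=use_cuda):
    loss = F.l1_loss(model(lq), gt)
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```

Gradient checkpointing on the backbone blocks is another common lever, trading recomputation time for activation memory.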