A TensorFlow implementation of "Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network", a deep-learning-based Single-Image Super-Resolution (SISR) model.
Is there a way to reduce GPU memory usage without scaling down the model or the image? On a 16 GB GPU I can only process an image a little over 1080p.
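One common way to bound inference memory for SISR models is to split the input into overlapping tiles, upscale each tile independently, and stitch the results back together; peak memory then depends on the tile size rather than the full image. The sketch below is illustrative and not part of this repository: `upscale_fn` is a hypothetical stand-in for the model's inference call, and the tile and padding sizes are arbitrary defaults.

```python
import numpy as np

def upscale_in_tiles(image, upscale_fn, scale=2, tile=256, pad=8):
    """Upscale `image` (H, W, C) tile by tile to limit peak memory.

    `upscale_fn` is a placeholder for the model's inference call; it must
    map an (h, w, C) array to an (h*scale, w*scale, C) array. Each tile is
    cropped with a `pad`-pixel margin so the stitched borders stay seamless
    as long as the model's receptive field fits inside the margin.
    """
    h, w, c = image.shape
    out = np.zeros((h * scale, w * scale, c), dtype=image.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Crop the tile plus a margin of context on every side.
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1, x1 = min(y + tile + pad, h), min(x + tile + pad, w)
            sr = upscale_fn(image[y0:y1, x0:x1])
            # Keep only the central (non-margin) region of the result.
            ty, tx = (y - y0) * scale, (x - x0) * scale
            th = (min(y + tile, h) - y) * scale
            tw = (min(x + tile, w) - x) * scale
            out[y * scale:y * scale + th,
                x * scale:x * scale + tw] = sr[ty:ty + th, tx:tx + tw]
    return out
```

With this approach the trade-off is runtime (one forward pass per tile) rather than quality, provided the padding is at least as large as the network's receptive field radius.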