aliencaocao opened 3 months ago
I tried this batch inference, but after testing, it turned out to be slower than running inference on a single image. Is there something wrong? @aliencaocao
It could be CPU bound, because the implementation is essentially a lot of Python for loops over np arrays; it is not as optimized as it could be. The performance improvement shows up mainly with a very large batch size, like 100+, where the savings from launching fewer GPU kernels outweigh the extra overhead of the for loops.
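A minimal illustration of that trade-off, in plain NumPy on the CPU (not the actual upscaler code): a Python loop issues one operation per frame, while a stacked batch issues a single operation for all frames, amortizing the per-call overhead.

```python
import time
import numpy as np

# Illustrative sketch only: 100 same-shape frames, processed two ways.
frames = [np.random.rand(64, 64, 3).astype(np.float32) for _ in range(100)]

# Per-frame loop: one array operation launched per frame.
t0 = time.perf_counter()
out_loop = [f * 2.0 for f in frames]
t_loop = time.perf_counter() - t0

# Batched: stack once, then a single operation over the whole batch.
batch = np.stack(frames)            # shape (100, 64, 64, 3)
t0 = time.perf_counter()
out_batch = batch * 2.0
t_batch = time.perf_counter() - t0

# Both paths produce identical results.
assert np.allclose(np.stack(out_loop), out_batch)
print(out_batch.shape)
```

The same effect is magnified on a GPU, where each launched kernel carries a fixed dispatch cost that a small batch cannot amortize.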
Allow the user to pass a list of np arrays to the `enhance` method. Note that, due to constraints of the model and its padding, only images of the same height and width (shape) are supported. You can resize them to the same shape beforehand.
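As a rough sketch of the same-shape constraint (plain NumPy; the `enhance` call below is a hypothetical stand-in, not the actual Real-ESRGAN API), all frames must share one shape so they can be stacked into a single batch tensor before inference:

```python
import numpy as np

# Hypothetical batched call: enhance(frames) with a list of HxWxC arrays.
# All frames must have the same shape; resize beforehand if they differ.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]

# The padding logic assumes one shape for the whole batch, so check first.
shapes = {f.shape for f in frames}
assert len(shapes) == 1, "all images must share the same height and width"

# Stacking into one (N, H, W, C) tensor is what makes batching possible.
batch = np.stack(frames)
print(batch.shape)
```

If the frames differ in size, something like `cv2.resize` can bring them to a common shape before batching.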
This can be useful for fast video inference, where all frames are the same size.
Fixes https://github.com/xinntao/Real-ESRGAN/issues/634