xinntao / Real-ESRGAN

Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
BSD 3-Clause "New" or "Revised" License

Results are different #446

Closed sukruburakcetin closed 1 year ago

sukruburakcetin commented 1 year ago

The result that I got from the Gradio Web Demo (https://huggingface.co/spaces/akhaliq/Real-ESRGAN) is different from the result that I got when I executed the code from GitHub (https://github.com/xinntao/Real-ESRGAN).

I used the base model, RealESRGAN_x4plus (not the anime one), when I executed the inference function; the web demo uses the same model with the same version (v0.1.0).

I also downloaded the code from Hugging Face and ran the same inference function, and still got the same output as this one.

This one is the original input: original_input

This one is when I clone the repository and run the code with model RealESRGAN_x4plus: GitHub_result

This is the result from the Hugging Face Gradio web demo when I choose the base method (which uses RealESRGAN_x4plus, as far as I can tell): huggingface_result

I think the result that I got from the Gradio web demo is far superior to what I got here (Vulkan execution yielded the same output). The Colab demo, the web demo, and the GitHub clone demo version [Real-ESRGAN V0.2.5.0] all produce the same result, which is shown in the second picture. I tried to contact the owner but I haven't received a response yet. I just wanted to know what the difference between them is while patiently waiting for an answer from the owner.

Thank you so much.

prakhar625 commented 1 year ago

Can confirm, I am facing the same issue at my end.

sukruburakcetin commented 1 year ago

@prakhar625 Thank you so much for backing me up by testing the sample pictures on your setup.

Okay, I figured out the reason why we have two different outputs for different runs.

Resizing

```python
...
basewidth = 256
wpercent = basewidth / float(img.size[0])
hsize = int(float(img.size[1]) * wpercent)
# Note: Image.ANTIALIAS was deprecated in Pillow 9.1 and removed in 10.0;
# Image.LANCZOS is the equivalent filter on newer Pillow versions.
img = img.resize((basewidth, hsize), Image.ANTIALIAS)
```

The code block above simply resizes the given image before the algorithm runs, so the image that gets processed is saved in antialiased (downscaled) form. When I applied this tweak to the code I had cloned from the Hugging Face portal on my desktop, I was able to run it and got the superior result, as desired.
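For anyone who wants to reproduce this, the aspect-ratio math in that snippet can be factored into a small helper. This is just a sketch; the function name is mine and not part of either repo, but the arithmetic matches the Hugging Face demo's pre-resize step:

```python
def target_size(width, height, basewidth=256):
    """Compute the (w, h) that the demo resizes an input image to
    before inference: a fixed base width, with the height scaled
    to preserve the original aspect ratio (truncated to an int,
    as in the demo code)."""
    wpercent = basewidth / float(width)
    hsize = int(float(height) * wpercent)
    return basewidth, hsize

# e.g. a 1024x768 input is downscaled to 256x192 before the 4x model runs,
# so the final output is 1024x768 rather than a 4x blow-up of the original.
print(target_size(1024, 768))
```

Because the demo feeds the model a 256-pixel-wide image, the 4x upscale effectively regenerates detail at roughly the original resolution, which is likely why its output looks so different from running RealESRGAN_x4plus directly on the full-size input.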