allenai / satlas-super-resolution

Testing on small_val_set gives same super res for all input images #33

Closed sharmaine1028 closed 1 week ago

sharmaine1028 commented 1 month ago

Hello, I was trying out the different commands with the pretrained weights (pretrain_network_g: esrgan_16S2.pth and no weights for pretrain_network_d) on the small validation set, specifically the command for evaluating the model on a validation or test set when ground-truth high-res images are available. I ran: python -m ssr.test -opt ssr/options/esrgan_s2naip_urban.yml

However, I am getting the same super-res image for every input, and it differs greatly from the ground truth. I have placed some screenshots below.

Example of the ground truth data: [image]
My output, regardless of the input image: [image]

I was wondering if I am missing some important configuration, or what else might explain getting the same super-res image every time? Your help would be greatly appreciated.

piperwolters commented 1 month ago

Hm, I just tested the esrgan_16S2.pth weights and am getting normal results:
Target: 0_testing_gt [image]
SR: 0_testing [image]

I ran: python -m ssr.test -opt ssr/options/testing.yml, where testing.yml looks like this (with my file paths): testing.yml.zip

Can you confirm that you downloaded the right weights and changed the pretrain_network_g path to point to those weights?
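If it helps, a quick way to double-check is to print the path that the options file actually resolves to and make sure that file exists. A rough sketch is below; the filename and the path / pretrain_network_g key layout are assumptions based on the BasicSR-style configs, so adjust them to your testing.yml:

```python
# Hypothetical sanity check: print the generator weight path from the
# options file and verify that the file exists on disk.
# The filename and key layout below are assumptions -- adjust to your setup.
import os
import yaml

with open("ssr/options/testing.yml") as f:
    opt = yaml.safe_load(f)

weights_path = opt["path"]["pretrain_network_g"]
print("pretrain_network_g ->", weights_path)
print("exists on disk:", os.path.isfile(weights_path))
```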

sharmaine1028 commented 1 month ago

Hi Piper, thank you so much for your quick reply and help!

I am using the model weights provided on the README page. May I ask whether I need to do any training on the model for it to work? I was initially planning to test it using only the pretrained weights. I have tried the configuration you provided above, changing only the file paths for the datasets and pretrain_network_g, but I still get the same output.

Here is the config file I am using (testing.zip) and the model weights I am using from the README page (16-S2-images).

piperwolters commented 1 month ago

You should not have to train these models to get the quality I showed in the example above. I just double-checked that the weights I used to get that output match the weights on the website, and they are the exact same file. Can you confirm that the weights are being loaded into the model?
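One way to verify that is to load the checkpoint directly and inspect its keys before worrying about the test script. Here is a rough sketch; the filename and the 'params' / 'params_ema' key names are assumptions (BasicSR-style checkpoints usually use one of them), so just print whatever keys you actually see:

```python
# Hypothetical check that the downloaded .pth file is a readable PyTorch
# checkpoint containing generator weights. The 'params'/'params_ema' key
# names are assumptions -- print the actual keys to confirm the layout.
import torch

ckpt = torch.load("esrgan_16S2.pth", map_location="cpu")
print("top-level keys:", list(ckpt.keys()))

# Fall back to the raw dict if there is no nested state dict.
state = ckpt.get("params_ema", ckpt.get("params", ckpt))
print("number of tensors:", len(state))
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))
```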

And do you get the same result when using one of the other models (e.g., the 8S2 model)?

sharmaine1028 commented 1 month ago

Hi Piper,

Thank you so much for your help. I have tried the 8S2 model, and I get the same output images. I believe my file paths are correct, since inference with this command works as expected using the same paths: python -m ssr.infer -opt ssr/options/infer_example.yml

Additionally, if I comment out the pretrain_network_g, I get a completely black image instead of the current output. I assume this means the model weights are being loaded?

I'm not really sure why I am getting the same output regardless of input when testing against the ground truth. For my project, I am only using the inference images, so testing against the ground truth is not as important, but I wanted to try it out for comparison.
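For reference, one way I could verify that the saved SR outputs really are byte-identical is to hash them; a rough sketch below, where the results directory is just a placeholder for wherever ssr.test writes its images:

```python
# Rough sketch: hash every image the test script wrote to confirm the
# outputs are byte-identical. The results directory is a placeholder.
import hashlib
from pathlib import Path

results_dir = Path("results/esrgan_s2naip_urban")  # placeholder path
hashes = {p.name: hashlib.md5(p.read_bytes()).hexdigest()
          for p in sorted(results_dir.glob("**/*.png"))}
for name, digest in hashes.items():
    print(name, digest)
print("unique outputs:", len(set(hashes.values())), "out of", len(hashes))
```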

piperwolters commented 3 weeks ago

Hi!

I am unsure why this is happening for you. So running the inference script with the same weights gives you good results, but running the test script produces the same image every time?