Open Raven888888 opened 2 years ago
Thank you for finding the bug in our code; we have updated it.
The problem is that you are using the wrong input image size. The CelebA model (X4) expects an input of 44x54 (because the high-resolution images are 176x216). So you need to resize the image to 44x54 before feeding it to the model to get the expected result. Similarly, the FFHQ model expects a 64x64 input.
I hope this helps. If you want to test a custom dataset, it is best to retrain the model on your dataset's distribution.
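To make the size requirement concrete: the LR input must be exactly the HR size divided by the scale factor. A small helper to compute it (my own sketch, not from the repo; only the 176x216 HR size and the 44x54 / 64x64 inputs come from this thread):

```python
def expected_lr_size(hr_size, scale):
    """Return the (width, height) a pretrained X`scale` model expects,
    given the HR resolution it was trained to produce."""
    w, h = hr_size
    if w % scale or h % scale:
        raise ValueError("HR size must be divisible by the scale factor")
    return (w // scale, h // scale)

print(expected_lr_size((176, 216), 4))  # (44, 54) -> CelebA_X4.pt input
print(expected_lr_size((176, 216), 8))  # (22, 27) -> CelebA_X8.pt input
```

You would then resize the LR image to this size with your resampler of choice (note that `cv2.resize` takes the target size as (width, height)).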
Nope, unfortunately resizing does not help.
This is how I run the script.
python test.py --model SISN --pretrain CelebA_X4.pt --dataset_root paper --save_root CelebA/paper
I have again modified test.py to include resizing as you suggested:
import cv2
import torch.nn.functional as F
from skimage import color, io

LR = color.gray2rgb(io.imread(path))  # read image, force 3 channels
# cv2.resize takes (width, height): 44x54 for CelebA (HR 176x216 / 4), 64x64 for FFHQ
to_size = (44, 54) if opt.pretrain.split('_')[0].upper() == 'CELEBA' else (64, 64)
LR = cv2.resize(LR, to_size)
print(LR.shape)
LR = im2tensor(LR).unsqueeze(0).to(dev)  # HWC uint8 -> 1xCxHxW tensor
LR = F.interpolate(LR, scale_factor=opt.scale, mode="nearest")  # pre-upsample to HR size
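For reference, nearest-mode interpolation with an integer scale factor simply repeats each pixel `scale` times along each axis. A pure-Python sketch of the same operation (my own illustration, not from the repo):

```python
def nearest_upsample(img, scale):
    """Repeat each pixel `scale` times per axis, like
    F.interpolate(mode='nearest') with an integer scale_factor.
    `img` is a 2-D list of pixel values."""
    out = []
    for row in img:
        up_row = [px for px in row for _ in range(scale)]
        for _ in range(scale):
            out.append(list(up_row))
    return out

tiny = [[1, 2],
        [3, 4]]
print(nearest_upsample(tiny, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```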
(Screenshots attached: FFHQ model output, CelebA model output, and the result claimed in the paper.)
Since your paper's results look impressive, I am more inclined to believe there is a mistake in the released code or model weights. Can you please check? @mdswyz
Cheers!
I have retested our code (downloaded directly from my GitHub) and found no such problem. I have attached the image of my input (what I have shown is the result of inference with CelebA_X8.pt from our paper); you can try it.
By the way, the LR images are generated in MATLAB (just like the standard SR datasets).
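Since the datasets are generated with MATLAB's `imresize` (which defaults to bicubic interpolation with antialiasing when shrinking), LR inputs produced with a different resampler can behave differently at test time. As a crude stand-in for illustration only (a box filter, not the antialiased bicubic kernel `imresize` actually uses), integer-factor downsampling by block averaging looks like:

```python
def box_downsample(img, factor):
    """Average each factor x factor block (a crude antialiased downsample).
    `img` is a 2-D list of numbers whose dimensions are divisible by `factor`."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

print(box_downsample([[1.0, 2.0],
                      [3.0, 4.0]], 2))  # [[2.5]]
```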
Thanks @mdswyz
Managed to reproduce the results in the paper. A few things I learned along the way:
CelebA_X8.pt only works on a 22x27 input (nothing larger) to produce the 176x216 output. If I use CelebA_X4.pt on a 44x54 image with scale x4, it does not produce good output (see above). Overall, very impressive work in pushing the boundaries. Cheers
Looking carefully at your results above, I noticed a problem: your LR image is 214x259, but it is very blurry (I am not sure how you acquired it; its texture information is severely lost). In that case this is a deblurring problem rather than an SR problem. So when you downsample the image to 44x54, the result will be extremely poor (because your high-resolution image is already blurry), which directly affects the reconstruction.
@mdswyz However, when I downsample my acquired image (LR 214x259) to 22x27 and use CelebA_X8.pt, it works as well as the image you provided (LR 22x27). So I doubt it is an input issue, since it works at the X8 scale. The problem is with the X4 scale, where it does not work.
Did you solve this problem?
@lynshwoo2022 Unfortunately no, I have abandoned this project.
@Raven888888 So X8 can't handle images larger than 22x27, and X4 can't handle even 44x54?
Got the pretrained model checkpoints from here.
Tried running test.py with both models (X4) on the following images, as shown in the paper. I had to change the following line, else net.forward() will not work.
(Images attached: original LR input, FFHQ model output, CelebA model output.)
This is far from what is shown in the paper. There is barely any difference in the SR outputs of the two models. Please advise, thanks.