HVision-NKU / SRFormer

Official code for "SRFormer: Permuted Self-Attention for Single Image Super-Resolution" (ICCV 2023) and SRFormerV2
https://openaccess.thecvf.com/content/ICCV2023/papers/Zhou_SRFormer_Permuted_Self-Attention_for_Single_Image_Super-Resolution_ICCV_2023_paper.pdf

Output is not sharp #6

Closed · harpavatkeerti closed 1 year ago

harpavatkeerti commented 1 year ago

Hi, very interesting work! I was wondering whether a slightly modified architecture could also be used for image deblurring. I tried the following modified model for deblurring 128x128 images:

network_g:
  type: SRFormer
  upscale: 1
  in_chans: 3
  img_size: 128
  window_size: 16
  img_range: 1.
  depths: [6, 6, 6, 6]
  embed_dim: 60
  num_heads: [6, 6, 6, 6]
  mlp_ratio: 2
  # upsampler: None
  resi_connection: '1conv'

with the simple loss function

l1_loss = torch.nn.L1Loss(reduction='mean')
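Putting the two together, my training step boils down to roughly the sketch below (simplified from my actual script; the SRFormer import path is an assumption based on the repo's BasicSR layout, and the tensors are stand-ins for a real dataloader):

```python
import torch
from basicsr.archs.srformer_arch import SRFormer  # import path assumed from the repo layout

# Model built with the same options as the YAML config above (upscale=1, no upsampler).
model = SRFormer(upscale=1, in_chans=3, img_size=128, window_size=16,
                 img_range=1., depths=[6, 6, 6, 6], embed_dim=60,
                 num_heads=[6, 6, 6, 6], mlp_ratio=2, resi_connection='1conv')

l1_loss = torch.nn.L1Loss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

# Stand-in tensors for one batch of blurred inputs and sharp targets.
blurry = torch.rand(4, 3, 128, 128)
sharp = torch.rand(4, 3, 128, 128)

optimizer.zero_grad()
restored = model(blurry)          # upscale=1, so the output stays 128x128
loss = l1_loss(restored, sharp)
loss.backward()
optimizer.step()
```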

The outputs I am getting are decent, but the model cannot produce sharp edges and corners. The results look smooth, closer to an oil painting. Can you please tell me if there is something wrong with this method?

I had a couple of other doubts about the model architecture:

- I didn't understand the img_size parameter. I am able to pass an image of any size as input, and the model produces the output without any error.
- Are you not using cross-window attention as SwinIR does through shifted windows?

Thanks a lot!!!

Z-YuPeng commented 1 year ago

I am very sorry for the late reply.

> I didn't understand the img_size parameter. I am able to pass an image of any size as input, and the model produces the output without any error.

You are right: the img_size parameter is only used to initialize the attention mask before training. If you pass an input whose size differs from img_size, the mask is simply recomputed, without any error.
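For reference, the mask recomputation follows the usual SwinIR-style pattern; below is a simplified standalone sketch of that computation, not a verbatim excerpt from our code:

```python
import torch

def calculate_mask(x_size, window_size=16, shift_size=8):
    """Shifted-window attention mask for an arbitrary input size
    (h and w are assumed padded to multiples of window_size)."""
    h, w = x_size
    img_mask = torch.zeros((1, h, w, 1))
    # Label the regions created by the cyclic shift so that tokens from
    # different regions cannot attend to each other.
    slices = (slice(0, -window_size),
              slice(-window_size, -shift_size),
              slice(-shift_size, None))
    cnt = 0
    for hs in slices:
        for ws in slices:
            img_mask[:, hs, ws, :] = cnt
            cnt += 1
    # Partition the label map into non-overlapping windows.
    mask_windows = img_mask.view(1, h // window_size, window_size,
                                 w // window_size, window_size, 1)
    mask_windows = mask_windows.permute(0, 1, 3, 2, 4, 5).reshape(
        -1, window_size * window_size)
    # Token pairs with different labels get a large negative bias before softmax.
    attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
    attn_mask = attn_mask.masked_fill(attn_mask != 0, -100.0)
    attn_mask = attn_mask.masked_fill(attn_mask == 0, 0.0)
    return attn_mask

# e.g. a mask for a 160x128 input rather than the img_size used at init
mask = calculate_mask((160, 128))
```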

Z-YuPeng commented 1 year ago

> Are you not using cross-window attention as SwinIR does through shifted windows?

We also use shifted windows; you can find the relevant code here.
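As a rough illustration of the mechanism (shapes and names here are only illustrative):

```python
import torch

shift_size = 8  # typically window_size // 2
x = torch.randn(1, 64, 64, 60)  # (B, H, W, C) feature map

# Cyclically shift the features so that the next window partition
# crosses the borders of the previous (unshifted) windows.
shifted = torch.roll(x, shifts=(-shift_size, -shift_size), dims=(1, 2))
# ... windowed attention runs on `shifted`, using a mask like the one above ...
out = torch.roll(shifted, shifts=(shift_size, shift_size), dims=(1, 2))  # undo the shift
```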

Z-YuPeng commented 1 year ago

> The outputs I am getting are decent, but the model cannot produce sharp edges and corners. The results look smooth, closer to an oil painting. Can you please tell me if there is something wrong with this method?

If your code is implemented correctly, this may be a question worth exploring. We haven't tested our approach on deblurring, so we may not be able to give an answer for that reason.

Z-YuPeng commented 1 year ago

I hadn't checked this issue earlier because of other matters. Thank you for your attention to our work!

harpavatkeerti commented 1 year ago

No issues, thanks a lot for the replies.