HVision-NKU / SRFormer

Official code for "SRFormer: Permuted Self-Attention for Single Image Super-Resolution" (ICCV 2023)
https://openaccess.thecvf.com/content/ICCV2023/papers/Zhou_SRFormer_Permuted_Self-Attention_for_Single_Image_Super-Resolution_ICCV_2023_paper.pdf

SRformer based on Real-ESRGAN? #29

Open AIisCool opened 6 months ago

AIisCool commented 6 months ago

https://github.com/HVision-NKU/SRFormer/issues/19#issuecomment-1779510258

Is this still coming at some point?

Z-YuPeng commented 3 months ago

Sorry for the late reply. I trained SRFormer on the Real-ESRGAN pipeline earlier, but had not yet had the opportunity to verify its correctness because I was preoccupied with another project. Today I briefly checked this model; if there are no issues, I plan to update the repo within a week.

zelenooki87 commented 3 months ago

@Z-YuPeng Any news?

Z-YuPeng commented 3 months ago

Very sorry for the delay! I have uploaded a new version; thanks for your continued interest in my work. Also, we are about to launch SRFormer V2.

cyy2427 commented 3 months ago

Hi, thank you @Z-YuPeng for sharing the real-world SR checkpoint. However, I get an error when loading the state_dict for SRFormer with your newly uploaded checkpoint and YAML config file. The error message is as follows:

RuntimeError: Error(s) in loading state_dict for SRFormer:
    size mismatch for layers.0.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    (the identical size mismatch, torch.Size([9, 576, 144]) vs. torch.Size([4, 576, 144]), is reported for blocks 1, 3, and 5 of every layer 0 through 5)

The command I used for inference with infer_sr.py was:

python basicsr/infer_sr.py -opt options/test/SRFormer/test_SRFormer-S_x4_real.yml --input_dir ../datasets/RealSRSet+5images/ --output_dir ./out/
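As a stopgap until a corrected checkpoint is available, one common workaround for this kind of mismatch (a sketch, not part of the official repo) is to drop the precomputed attn_mask entries from the state dict and let the freshly constructed model rebuild them: in Swin-style architectures such as SRFormer, attn_mask is a registered buffer derived from the image/window size, not a learned weight. The `strip_attn_masks` helper below is hypothetical:

```python
# Hypothetical workaround sketch: *.attn_mask entries in a Swin-style checkpoint
# are precomputed buffers (derived from the configured image/window size), not
# learned weights, so they can be dropped and recomputed by the current model.
def strip_attn_masks(state_dict):
    """Return a copy of the state dict without any *.attn_mask buffers."""
    return {k: v for k, v in state_dict.items() if not k.endswith("attn_mask")}

# Usage (assuming `model` is an SRFormer instance and `ckpt` the loaded dict):
#   cleaned = strip_attn_masks(ckpt["params"])
#   model.load_state_dict(cleaned, strict=False)
```

`strict=False` is needed so load_state_dict tolerates the now-missing buffer keys; all learned parameters are still loaded exactly.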

Z-YuPeng commented 2 months ago

Hi @cyy2427, thanks for your quick reply. I have fixed it and uploaded new weights for SRFormer-S_x4_real; please re-download SRFormer-S_x4_real.pth and run git pull in this repo.

zelenooki87 commented 2 months ago

@Z-YuPeng Could you also upload the previous (pre-fix) parameter version? In Chainner, that version worked normally for me, unlike the corrected one. Thanks.

Z-YuPeng commented 2 months ago

Hi @zelenooki87, I have re-uploaded it! Thanks for your suggestion!

AIisCool commented 2 months ago

@Z-YuPeng Excited to hear about SRFormer V2! Is there an ETA?

zelenooki87 commented 2 months ago

> Hi, @zelenooki87 , I have reuploaded it! Thanks for your suggestion!

Thank you so much for this re-upload. I converted the model to ONNX, and with the help of the vsmlrt module it works phenomenally for upscaling video clips in Selur's Hybrid. I am thrilled. It is also good for photos, of course. Can't wait for SRFormer V2. Greetings.

lyra-white commented 3 weeks ago

@Z-YuPeng Quick question: I can't find "SRFormer-S_x4_real.pth" in the Google Drive link. Has it been removed for some reason?