HVision-NKU / SRFormer

Official code for "SRFormer: Permuted Self-Attention for Single Image Super-Resolution" (ICCV 2023) and SRFormerV2
https://openaccess.thecvf.com/content/ICCV2023/papers/Zhou_SRFormer_Permuted_Self-Attention_for_Single_Image_Super-Resolution_ICCV_2023_paper.pdf

Question about training machine. #16

Closed mls1999725 closed 1 year ago

mls1999725 commented 1 year ago

Thanks for your excellent work! I would like to ask how many GPUs are used for training SRFormer and SwinIR? What type of GPUs are they? Looking forward to your reply.

Z-YuPeng commented 1 year ago

Hi! For the official setting, you need two NVIDIA RTX 3090 GPUs to train SRFormer or SwinIR. You can decrease the batch size, window size, patch size, and other hyper-parameters to train on fewer GPUs or on GPUs with less memory. I encourage you to try configuring these hyper-parameters!
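As a rough sketch of where those hyper-parameters live, here is a hypothetical excerpt of a BasicSR-style training options file (the framework this repository's training code follows). The key names (`gt_size`, `batch_size_per_gpu`, `window_size`) come from BasicSR conventions, and the numeric values below are illustrative only, not the official setting:

```yaml
# Hypothetical excerpt of a BasicSR-style options YAML.
# Exact keys and defaults depend on the config files shipped with this repo.
datasets:
  train:
    gt_size: 128            # HR patch size; reduce to cut memory per sample
    batch_size_per_gpu: 8   # reduce from the official setting to fit smaller GPUs

network_g:
  type: SRFormer
  window_size: 16           # a smaller attention window lowers memory cost
```

Shrinking `batch_size_per_gpu` is usually the first knob to turn, since it trades memory for gradient noise; changing `window_size` or `gt_size` also affects final accuracy, so results may differ from the paper.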

Z-YuPeng commented 1 year ago

We will close this issue since it has been inactive for a long time. Feel free to reopen it if needed.