sunny2109 / SAFMN

[ICCV 2023] Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution; runner-up method for the model complexity track in NTIRE2023 Efficient SR challenge

OpenXLab model details #14

Feynman1999 opened this issue 1 year ago

Feynman1999 commented 1 year ago

Which model did you upload to OpenXLab? Can you share the training details?

sunny2109 commented 1 year ago

Hi, thank you for your interest. We provide x2 and x4 pre-trained models for the OpenXLab demo.

For the training details, we adopt a multi-stage strategy that uses different loss functions and learning rates in each stage. The main loss functions are L1 loss, FFT loss, perceptual loss, LDL, and cosine similarity loss.
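As a rough illustration of how such a composite objective could be combined, here is a minimal NumPy sketch of a pixel L1 term plus an FFT (frequency-domain) term. The function names, the `fft_weight` value, and the exact form of the FFT term are assumptions for illustration; the thread only names the losses, and the repo's actual implementation (in PyTorch) may differ.

```python
import numpy as np

def fft_loss(sr, hr):
    # Frequency-domain L1: compare real and imaginary parts of the 2D FFT
    # of the super-resolved output against the ground-truth image.
    d = (np.fft.fft2(sr, axes=(-2, -1), norm="ortho")
         - np.fft.fft2(hr, axes=(-2, -1), norm="ortho"))
    return np.mean(np.abs(d.real) + np.abs(d.imag))

def stage1_loss(sr, hr, fft_weight=0.05):
    # Illustrative stage-1 objective: pixel-wise L1 plus a weighted
    # frequency-domain term. fft_weight is a placeholder, not a value
    # from the SAFMN paper or repo.
    return np.mean(np.abs(sr - hr)) + fft_weight * fft_loss(sr, hr)
```

Later stages would swap in or add the perceptual, LDL, and cosine similarity terms with their own weights and learning rates.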

Feynman1999 commented 1 year ago

Thanks. The training config is not in the repo, right?

sunny2109 commented 1 year ago

Yeah, the whole training procedure is similar to that of Real-ESRGAN; you can refer to it for the implementation.

Feynman1999 commented 1 year ago

OK. Do you use the same config (e.g., kernel, resize, blur probability) as Real-ESRGAN? I want to retrain the model myself.
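For context, a Real-ESRGAN-style pipeline synthesizes low-quality inputs by chaining random degradations (blur, resize, noise, compression). The sketch below is a dependency-free toy version of one such first-order step; all probabilities and ranges are placeholders, not the authors' settings, and the real pipeline uses sampled Gaussian/sinc kernels, Poisson noise, and JPEG compression applied in two orders.

```python
import random
import numpy as np

def degrade(img, blur_prob=0.8, resize_range=(0.3, 1.5),
            noise_sigma=(1, 15), rng=None):
    # Toy first-order degradation: optional blur -> random rescale ->
    # additive Gaussian noise. Values are illustrative placeholders.
    rng = rng or random.Random(0)
    out = img.astype(np.float32)
    if rng.random() < blur_prob:
        # Cheap 3x3 box blur as a stand-in for a sampled Gaussian kernel.
        k = 3
        pad = np.pad(out, ((k // 2, k // 2), (k // 2, k // 2), (0, 0)),
                     mode="edge")
        out = sum(pad[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(k) for j in range(k)) / (k * k)
    # Random down/up-scaling via nearest-neighbour indexing.
    scale = rng.uniform(*resize_range)
    h, w = out.shape[:2]
    ys = np.clip((np.arange(int(h * scale)) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(int(w * scale)) / scale).astype(int), 0, w - 1)
    out = out[ys][:, xs]
    # Additive Gaussian noise with a randomly sampled sigma.
    sigma = rng.uniform(*noise_sigma)
    out = out + np.random.default_rng(0).normal(0, sigma, out.shape)
    return np.clip(out, 0, 255)
```

To reproduce the authors' results you would still need their actual kernel lists, probabilities, and ranges, e.g. from a Real-ESRGAN training YAML.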

harusaju commented 9 months ago

Hi, could you provide the training configs for the SAFMN_Real_x4.pth model? I would like to train the model on real-world data.