IceClear / StableSR

[IJCV2024] Exploiting Diffusion Prior for Real-World Image Super-Resolution
https://iceclear.github.io/projects/stablesr/

Replication issue #26

ITBeyond1230 opened this issue 1 year ago (Open)

ITBeyond1230 commented 1 year ago

Thank you for sharing the code. I tried to train the model from scratch following your training script and config; everything is the same except the DIV8K dataset (I don't have DIV8K). By the time I tested it, the model had been trained for 12000 steps (vs. your 16500 steps).

The train script is:

python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0,1,2,3,4,5,6,7 --name StableSR_Replicate --scale_lr False

The test script is:

python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --ddpm_steps 200 --dec_w 0.0 --colorfix_type adain

The input image: OST_120

The results from the model I trained: OST_120 (image attached)

Your pretrained model's results: OST_120 (1) (image attached)

What makes the difference? Is it the number of training steps or the DIV8K dataset? Or something else?
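For readers unfamiliar with the `--colorfix_type adain` flag in the test command above: the sketch below shows what an AdaIN-style color fix typically does, i.e. matching the per-channel mean and standard deviation of the SR output to the low-resolution input. It is a minimal illustration under that assumption, not necessarily the repository's exact implementation.

```python
import torch

def adain_color_fix(sr: torch.Tensor, lr: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Match the per-channel color statistics of `sr` to those of `lr`.

    sr, lr: (B, C, H, W) tensors in the same value range; spatial sizes may
    differ because only per-channel statistics are used.
    """
    # Per-image, per-channel mean/std over the spatial dimensions.
    sr_mean = sr.mean(dim=(2, 3), keepdim=True)
    sr_std = sr.std(dim=(2, 3), keepdim=True) + eps
    lr_mean = lr.mean(dim=(2, 3), keepdim=True)
    lr_std = lr.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the SR result, then re-apply the LR statistics (AdaIN).
    return (sr - sr_mean) / sr_std * lr_std + lr_mean
```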

IceClear commented 1 year ago

It is hard to say. For training, usually the longer, the better. After all, the official LDM seems to have been trained for about 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.

ITBeyond1230 commented 1 year ago

@IceClear Thanks for your quick response. I will try training for more steps and then check the results. Also, besides longer training, what are the key factors that help us get a good model?

ITBeyond1230 commented 1 year ago

> It is hard to say. For training, usually the longer, the better. After all, LDM is trained for 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.

"The performance between different checkpoints can also be different": so why not consider using the EMA strategy in your practice? LDM seems to use EMA.

IceClear commented 1 year ago

I guess longer training and more data should help.

IceClear commented 1 year ago

> "The performance between different checkpoints can also be different": so why not consider using the EMA strategy in your practice? LDM seems to use EMA.

I remember the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.

ITBeyond1230 commented 1 year ago

> I remember the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.

In the config, use_ema is set to False. Does that mean EMA is not used in training and testing? (screenshot of the config attached)

IceClear commented 1 year ago

> In the config, use_ema is set to False. Does that mean EMA is not used in training and testing?

Oh, my bad. I think I did not add EMA support for training on Stable Diffusion v2. You may give it a try if you are interested.
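For anyone who wants to try this, below is a minimal, hypothetical sketch of keeping an EMA copy of only the trainable (newly added) parameters; `SimpleEMA` and `trainable_params` are placeholder names, not identifiers from this repository, and the LDM codebase's own EMA helper could likely be reused instead.

```python
import torch

class SimpleEMA:
    """Exponential moving average over a list of parameters."""

    def __init__(self, params, decay: float = 0.999):
        self.decay = decay
        # Detached shadow copies of the parameters being tuned.
        self.shadow = [p.detach().clone() for p in params]

    @torch.no_grad()
    def update(self, params):
        # shadow = decay * shadow + (1 - decay) * current
        for s, p in zip(self.shadow, params):
            s.mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, params):
        # Overwrite the live parameters with their EMA versions (e.g. before eval/saving).
        for s, p in zip(self.shadow, params):
            p.copy_(s)

# Usage sketch (placeholders):
# trainable_params = [p for p in model.parameters() if p.requires_grad]
# ema = SimpleEMA(trainable_params)
# after every optimizer.step():           ema.update(trainable_params)
# before validation or checkpointing:     ema.copy_to(trainable_params)
```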

xyIsHere commented 1 year ago

> @IceClear Thanks for your quick response. I will try training for more steps and then check the results. Also, besides longer training, what are the key factors that help us get a good model?

Hi @ITBeyond1230, I think I have the same problem as you. Did you get better results for the first fine-tuning stage?

q935970314 commented 1 year ago

@ITBeyond1230 @xyIsHere I seem to be having the same problem. Have you had any good results?

xiezheng-cs commented 6 months ago

@ITBeyond1230 @xyIsHere @q935970314 I also seem to be having the same problem. Have you had any good results? I followed the settings in the code (same config, same dataset, same GPU), carefully chose the trained checkpoint, and tested all checkpoints, but the results are still worse than the public stablesr_000117.ckpt. I also tried training longer, but that didn't help; the outputs only got blurrier. So does using EMA work?
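As a practical aside, given the checkpoint-to-checkpoint variance mentioned above, it can help to sweep every saved checkpoint with the same test command quoted earlier in this thread. Below is a minimal sketch; the logs/StableSR_Replicate/checkpoints directory and the VQGANCKPT_PATH/INPUT_PATH placeholders are assumptions carried over from those commands, not verified paths.

```python
import glob
import os
import subprocess

# Run the evaluation script from this thread on every saved checkpoint.
for ckpt in sorted(glob.glob("logs/StableSR_Replicate/checkpoints/*.ckpt")):
    outdir = os.path.join("results", os.path.basename(ckpt))
    subprocess.run([
        "python", "scripts/sr_val_ddpm_text_T_vqganfin_old.py",
        "--config", "configs/stableSRNew/v2-finetune_text_T_512.yaml",
        "--ckpt", ckpt,
        "--vqgan_ckpt", "VQGANCKPT_PATH",
        "--init-img", "INPUT_PATH",
        "--outdir", outdir,
        "--ddpm_steps", "200",
        "--dec_w", "0.0",
        "--colorfix_type", "adain",
    ], check=True)
```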