ITBeyond1230 opened 1 year ago
It is hard to say. For training, usually the longer, the better. After all, the official LDM seems to have been trained for about 2.6M iterations with a batch size of 256. The performance of different checkpoints can also differ.
@IceClear Thanks for your quick response, I will try training for more steps and then check the results. Also, besides longer training, what are the key factors that help us get a good model?
"The performance between different checkpoints can also be different", so why not consider to use the EMA strategy in your practice? LDM seems to use EMA.
I guess longer training and more data should help.
I remember that the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.
In the config, use_ema is set to False. Does that mean EMA is not used in training and testing?
Oh, my bad. I think I did not add EMA support for the training on Stable Diffusion v2. You may give it a try if you are interested.
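For anyone who wants to try EMA on top of the current training code, here is a minimal sketch of what an EMA over only the trainable parameters could look like. This is just an illustration in plain PyTorch (the class name TrainableEMA and the decay value are made up), not code from this repo or from the released checkpoints:

```python
# Minimal EMA sketch (illustration only, not the repo's code): keep a shadow
# copy of only the trainable parameters and blend it in after every step.
import torch

class TrainableEMA:
    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        # Track only the parameters that are actually being fine-tuned.
        self.shadow = {
            name: p.detach().clone()
            for name, p in model.named_parameters() if p.requires_grad
        }

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # Call after each optimizer step: shadow = decay * shadow + (1 - decay) * param.
        for name, p in model.named_parameters():
            if name in self.shadow:
                self.shadow[name].mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, model: torch.nn.Module):
        # Load the averaged weights before evaluation; do this on a copy of the
        # model if you do not want to overwrite the raw training weights.
        for name, p in model.named_parameters():
            if name in self.shadow:
                p.copy_(self.shadow[name])
```

The idea would be to call update() after every optimizer step and copy_to() on a separate copy of the model before validation. Since only a small fraction of parameters is tuned here, the shadow copy stays cheap, but as noted above the actual gain is unclear.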
Hi @ITBeyond1230, I think I have the same problem as you. Did you get better results for the first fine-tuning stage?
@ITBeyond1230 @xyIsHere I seem to be having the same problem, have you guys had any good results?
@ITBeyond1230 @xyIsHere @q935970314 I also seem to be having the same problem. Have you had any good results? Following the settings in the code, with the same config, same dataset, and same GPU, I carefully selected among the trained checkpoints and tested all of them, but the results are still worse than the public stablesr_000117.ckpt. I also tried training longer, but that did not help; the results only became more blurry. So does using EMA actually work?
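For anyone doing the same kind of checkpoint sweep, here is a rough sketch of how it can be scripted on top of the test command from this repo. All paths, the VQGAN checkpoint name, and the output layout are placeholders for your own setup, not the exact setup used above:

```python
# Hypothetical sweep over saved checkpoints: run the repo's test script once
# per checkpoint so the outputs can be compared side by side.
import glob
import subprocess
from pathlib import Path

CKPT_DIR = "logs/StableSR_Replicate/checkpoints"  # placeholder: your log dir
INPUT_PATH = "inputs/val_lr"                      # placeholder: LR test images
VQGAN_CKPT = "path/to/vqgan.ckpt"                 # placeholder: CFW/VQGAN ckpt

for ckpt in sorted(glob.glob(f"{CKPT_DIR}/*.ckpt")):
    outdir = Path("sweep_outputs") / Path(ckpt).stem
    outdir.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "python", "scripts/sr_val_ddpm_text_T_vqganfin_old.py",
        "--config", "configs/stableSRNew/v2-finetune_text_T_512.yaml",
        "--ckpt", ckpt,
        "--vqgan_ckpt", VQGAN_CKPT,
        "--init-img", INPUT_PATH,
        "--outdir", str(outdir),
        "--ddpm_steps", "200",
        "--dec_w", "0.0",
        "--colorfix_type", "adain",
    ], check=True)
```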
Thank you for sharing the code. I tried to train the model from scratch following your training script and config; everything is the same except the DIV8K dataset (I don't have DIV8K). By the time I tested it, the model had been trained for 12,000 steps (vs. your 16,500 steps).
The training script is:
python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0,1,2,3,4,5,6,7 --name StableSR_Replicate --scale_lr False
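If you want to train for more steps later, and assuming main.py still supports the LDM-style --resume option (please check python main.py --help; this is an assumption on my side), a run can be continued from its log directory with something like:
python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0,1,2,3,4,5,6,7 --scale_lr False --resume logs/<your_StableSR_Replicate_run_dir>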
The test script is:
python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --ddpm_steps 200 --dec_w 0.0 --colorfix_type adain
The input image is:
The results from the model I trained:
Your pretrained model results:
What makes the difference? Is it the number of training steps or the DIV8K dataset? Or something else?
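One way to narrow this down is to compare both sets of results against the same ground-truth images numerically instead of only visually. A rough sketch, assuming the lpips package is installed and paired HR images are available (all folder names below are placeholders and the files are assumed to share names and sizes):

```python
# Compare two result folders against ground truth with PSNR and LPIPS.
import glob
import numpy as np
import torch
import lpips
from PIL import Image

loss_fn = lpips.LPIPS(net="alex")  # LPIPS expects tensors in [-1, 1]

def to_tensor(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

def psnr(a, b):
    # Images are in [0, 1], so the peak value is 1.0.
    mse = torch.mean((a - b) ** 2)
    return float(10.0 * torch.log10(1.0 / mse))

def evaluate(result_dir, gt_dir):
    psnrs, lpipses = [], []
    for gt_path in sorted(glob.glob(f"{gt_dir}/*.png")):
        name = Path(gt_path).name if False else gt_path.split("/")[-1]
        gt = to_tensor(gt_path)
        sr = to_tensor(f"{result_dir}/{name}")
        psnrs.append(psnr(sr, gt))
        with torch.no_grad():
            lpipses.append(float(loss_fn(sr * 2 - 1, gt * 2 - 1)))
    return float(np.mean(psnrs)), float(np.mean(lpipses))

print("mine       (PSNR, LPIPS):", evaluate("out_mine", "gt"))
print("pretrained (PSNR, LPIPS):", evaluate("out_pretrained", "gt"))
```

If the gap also shows up consistently in PSNR/LPIPS across many images, it is less likely to be a per-image fluke and more likely to come from the training length or the missing DIV8K data.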