csjliang / DASR

Official implementation of the paper 'Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution' in ECCV 2022
Apache License 2.0

Why does the training not converge? #2

Closed Lvhhhh closed 2 years ago

Lvhhhh commented 2 years ago

I used the train_DASR.yml you provided and changed only two things: 1. the training samples are DIV2K; 2. pretrain_network_g is none, so the model was trained from random initialization. I then found that all of the losses are NaN. Should I train it using a pretrained model?
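For reference, the two changes described above would look roughly like the following in a BasicSR-style train_DASR.yml. The keys follow the BasicSR convention; the dataset path is a placeholder, not the repo's actual value.

```yaml
# Illustrative fragment of train_DASR.yml (paths are placeholders)
datasets:
  train:
    name: DIV2K
    dataroot_gt: datasets/DIV2K/DIV2K_train_HR   # change 1: train on DIV2K

path:
  pretrain_network_g: ~   # change 2: no pretrained generator (random init)
```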

Lvhhhh commented 2 years ago

If I delete the degree_list and reduce the random range of the whole degradation process, will it converge more easily?

csjliang commented 2 years ago

Hi, thanks for your attention to our paper. Due to the instability of adversarial training, the model may be hard to converge when trained from scratch; this can also happen with other existing methods. You should use a pre-trained model as in our setting, or you can first train the model using only a pixel-wise loss and then add the perceptual and adversarial losses to optimize perceptual quality. Reducing the degradation space may ease training but can sacrifice generalization capacity.
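The two-stage schedule suggested above can be sketched as a simple loss-weight schedule. This is a minimal illustration; the function name, iteration count, and weight values are assumptions, not the paper's actual settings.

```python
def loss_weights(cur_iter, pixel_only_iters=100_000):
    """Return loss weights for the current training iteration.

    Stage 1: optimize only the pixel-wise (e.g. L1) loss so the
    generator first converges to a stable PSNR-oriented model.
    Stage 2: add the perceptual and adversarial losses to improve
    perceptual quality, keeping the pixel loss as an anchor.
    All numbers here are illustrative placeholders.
    """
    if cur_iter < pixel_only_iters:
        return {"pixel": 1.0, "perceptual": 0.0, "gan": 0.0}
    return {"pixel": 1.0, "perceptual": 1.0, "gan": 5e-3}
```

In practice the same effect is often achieved by running two configs back-to-back: train a pixel-loss-only model first, then finetune from that checkpoint with the full GAN objective.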

Lvhhhh commented 2 years ago

Hi @csjliang, thank you for your time. I have another question: if I want to manually increase or decrease the level of noise or blur, how do I do that? Should I change the weights (the a mentioned in the paper, with dimension 5), or change v, whose dimension is 33?

csjliang commented 2 years ago

Hi,

Thanks for your question. You need to manually change v (dimension 33), as it denotes the degradation parameters and is interpretable. To adjust a given operation such as noise or blur, change the values in the corresponding dimensions of v. You can find the mapping between dimensions and degradation parameters in DASR_model.py. Thanks.
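Editing v amounts to overriding individual entries of a 33-element vector before it is fed to the network. The sketch below shows the idea; the index used for the noise level is hypothetical, and the real index-to-parameter mapping must be looked up in DASR_model.py.

```python
def set_degradation(v, index, value):
    """Return a copy of the 33-dim degradation vector v with one
    interpretable entry overridden (e.g. a noise sigma or a blur
    kernel width).  The layout of v is defined in DASR_model.py."""
    assert len(v) == 33, "v is expected to hold 33 degradation parameters"
    out = list(v)
    out[index] = value
    return out

v = [0.0] * 33                # a baseline degradation vector
NOISE_SIGMA_IDX = 20          # HYPOTHETICAL position of the noise level
v_noisy = set_degradation(v, NOISE_SIGMA_IDX, 25.0)  # increase the noise
```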