DamperHa closed this issue 3 years ago
Hi, did you set the option `num_features` to 64?
Yes, the setup here is the same as SRFBN in the paper; the details are as follows:

```json
{
  "scale": 4,
  "n_workers": 4,
  "batch_size": 16,
  "LR_size": 40,
  "num_features": 64,
  "in_channels": 3,
  "out_channels": 3,
  "num_steps": 4,
  "num_groups": 6
}
```
We trained our final model for 1000 epochs in total on the DIV2K and Flickr2K datasets. Even when we trained on DIV2K alone, the best PSNR on Set5 was 32.39 dB. Apart from the techniques mentioned in the paper, we did not use any other tricks to improve performance. I am puzzled by the result you mentioned, because our final model (with only 32 features) reaches 32.11 dB on Set5 after just 200 epochs of training.
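For reference, the PSNR numbers quoted above follow the standard definition in terms of mean squared error. A minimal sketch in pure Python (not the repo's evaluation code, which typically also crops borders and evaluates on the Y channel):

```python
import math

def psnr(mse: float, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, given the mean squared error."""
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: an MSE of 100 on 8-bit images gives roughly 28.13 dB.
print(round(psnr(100.0), 2))  # → 28.13
```

Small differences in this evaluation protocol (border cropping, RGB vs. Y channel) can easily account for a few hundredths of a dB between reported results.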
Thank you for your answer. Your work has helped me a lot. I will try other models
I'm sorry I couldn't solve your problem.
Hello, may I ask whether you ran into any training instability while training EDSR? When I reach epoch 35, a message appears saying the batch is skipped. Is some stabilization needed when training EDSR? Thanks.
I suspect the initialization method affects the stability of the training process. Comment out this line and try again.
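If the line in question is a Kaiming-style initialization (an assumption; the thread does not show the code), the relevant quantity is the scale of the initial weights. A rough sketch of the standard deviation such an init produces:

```python
import math

def kaiming_normal_std(fan_in: int, gain: float = math.sqrt(2.0)) -> float:
    """Std of a Kaiming-normal init: gain / sqrt(fan_in)."""
    return gain / math.sqrt(fan_in)

# Hypothetical example: a 3x3 conv with 64 input channels,
# so fan_in = 64 * 3 * 3 = 576.
std = kaiming_normal_std(64 * 3 * 3)
print(round(std, 4))  # → 0.0589
```

Commenting out a custom init like this falls back to the framework's default initialization, which is often scaled more conservatively and can keep early training stable.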
Thanks!
Hello, I have a question I need your help with. When I reproduced the model from the paper, the best PSNR on Set5 was 32.13 dB. Could you tell me whether any other techniques are helpful when training the model? Thanks.