Closed: Darrenxc closed this issue 3 years ago
For comparison, this is the original picture for the second one
Maybe you should download these two pics to see the differences. :(
First, the checkerboard patterns occur in the early training stage because the final result is the sum of multi-scale outputs; you may check the following code: https://github.com/chaofengc/PSFRGAN/blob/5fc0e1f616b5b87ebdd96f450c53b60d4a3c5c4b/models/psfrnet.py#L128-L130
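To make the explanation above concrete, here is a minimal NumPy sketch of summing multi-scale outputs (coarsest first) by upsampling and adding. This is an illustration of the idea, not PSFRGAN's actual code; the function names and the nearest-neighbour upsampling choice are assumptions.

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbour upsampling of an (H, W, C) array."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def multi_scale_sum(outputs):
    """Sum RGB predictions from several scales, coarsest first:
    upsample the running image and add the next scale's prediction.
    Early in training, the unconverged coarse terms in this sum are
    one plausible source of checkerboard-looking patterns."""
    out = outputs[0]
    for o in outputs[1:]:
        out = upsample_nearest(out) + o
    return out
```

For example, summing a 4x4 coarse output with an 8x8 fine output yields an 8x8 image whose coarse component was simply replicated 2x2 before the add.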
As for the artifacts in your results, I did not notice that before because the visual difference is small. I have no idea why it happens. You may try some ablation studies to check whether it is caused by Loss_RS.
Is there another good method to replace the add operation?
I believe this is not caused by the upsample + add operation.
If you are targeting enhancement of such mild degradation, you probably need to fine-tune the degradation model and create training data similar to your inputs.
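A minimal sketch of generating "mildly degraded" training inputs, assuming the real inputs suffer only slight blur and noise. The pipeline (average-pool downsample, nearest-neighbour upsample, light Gaussian noise) and all parameter values are illustrative; the thread only says to match the degradation model to your actual inputs.

```python
import numpy as np

def mild_degradation(img, scale=2, noise_sigma=2.0, rng=None):
    """Synthesize a mildly degraded version of an (H, W, C) image in
    [0, 255]: downsample by average pooling, upsample back with
    nearest-neighbour replication, then add light Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    h, w, c = img.shape
    # average-pool downsample by `scale`
    small = img.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    # nearest-neighbour upsample back to the original size
    low = small.repeat(scale, axis=0).repeat(scale, axis=1)
    # light additive noise, then clip back to the valid range
    noisy = low + rng.normal(0.0, noise_sigma, low.shape)
    return np.clip(noisy, 0.0, 255.0)
```

The key point is to tune `scale` and `noise_sigma` (and any blur kernels you add) so the synthetic inputs visually match your real test inputs, rather than reusing a pipeline built for severe degradation.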
Thank you. I will try some ablation studies to test Loss_RS. Also, I have tried lots of different degradation models to fit different scenarios. BTW, your work is excellent; I hope it gets published in a great venue!
Thank you. In my opinion, Loss_RS might not be the reason; in my experiments it mainly helps with severe degradation. For such mild degradation, the key may lie in the discriminator.
Thank you, I will try these.
@Darrenxc did you implement the training code of PSFRGAN? Can we chat privately?
Based on your PSFRGAN network, I tried to train my own model. Here are some questions about image artifacts. During the training process, there are some severe checkerboard effects in the early training stage. Although the effect is reduced later, it still appears at test time (please zoom in to see the details).
Do you obtain the same results during the early training stage? Does Loss_RS increase both the image details and the checkerboard effect? Thank you.