It seems that your loss is the combination of the SR loss (accumulated in d_sr) and the kernel loss (accumulated in d_ker), but only for the last step of the Restorer and Estimator, since the lines "total_loss += d_ker" and "total_loss += d_sr" are outside the loop.
This leads me to two questions:
Why are you keeping track of d_sr and d_ker for the other steps?
Did you try using d_sr and d_ker for each step, and did it give bad results?
I tried supervising the network at every iteration, but it made the training process very unstable. Thus, I keep only the supervision at the final iteration.
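A minimal sketch of the pattern being discussed — per-step losses are tracked (e.g. for logging), while only the final iteration's losses are added to the objective and backpropagated. This is not the actual DAN code; the modules, tensor shapes, and names below are placeholders:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for DAN's Restorer and Estimator modules;
# real DAN uses conv networks over images and kernels.
restorer = nn.Linear(8, 8)
estimator = nn.Linear(8, 8)
l1 = nn.L1Loss()

lr_img = torch.randn(4, 8)       # placeholder low-res input
gt_sr = torch.randn(4, 8)        # placeholder ground-truth SR target
gt_ker = torch.randn(4, 8)       # placeholder ground-truth kernel

ker = torch.zeros_like(lr_img)   # initial kernel estimate
d_sr_log, d_ker_log = [], []     # per-step losses, kept only for monitoring

n_steps = 4
for _ in range(n_steps):
    sr = restorer(lr_img + ker)  # refine SR given current kernel estimate
    ker = estimator(sr)          # refine kernel given current SR estimate
    d_sr = l1(sr, gt_sr)
    d_ker = l1(ker, gt_ker)
    d_sr_log.append(d_sr.item())
    d_ker_log.append(d_ker.item())

# Only the final step's losses enter the objective (the stable variant
# described above); moving these two lines inside the loop would
# supervise every iteration instead.
total_loss = d_sr + d_ker
total_loss.backward()
```

Because `total_loss` is built from the last loop iteration's `d_sr` and `d_ker`, gradients still flow through all earlier steps (each step's output feeds the next), but the intermediate outputs are not directly penalized.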
Thanks a lot for the code.
I have some questions on this part of the code, which appears in both v1 and v2: https://github.com/greatlog/DAN/blob/53f9b7010ba9ee662b2abb0b33c8df3939829a9f/codes/config/DANv2/models/blind_model.py#L148-L162
Thanks a lot, Charles