greatlog / DAN

This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution

Information about the loss #36

Closed claroche-gpfw-zz closed 2 years ago

claroche-gpfw-zz commented 2 years ago

Thanks a lot for the code.

I have some questions on this part of the code that appears both in v1 and v2: https://github.com/greatlog/DAN/blob/53f9b7010ba9ee662b2abb0b33c8df3939829a9f/codes/config/DANv2/models/blind_model.py#L148-L162

It seems that your loss is the combination of the SR loss (contained in d_sr) and the kernel loss (contained in d_ker), but only for the last step of the Restorer and Estimator, since the lines `total_loss += d_ker` and `total_loss += d_sr` are outside the loop. This leads me to two questions:
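For reference, a minimal sketch of the pattern being described (not the repo's actual code; the loop body and names `d_ker`/`d_sr` are simplified stand-ins based on the linked snippet): the loss terms are recomputed each iteration, but only the values from the final iteration ever reach `total_loss`.

```python
def training_step(num_iters=4):
    """Sketch: alternate Estimator/Restorer for num_iters steps,
    but supervise only the last one."""
    total_loss = 0.0
    d_ker = d_sr = 0.0
    for step in range(num_iters):
        # Hypothetical stand-ins for the per-step kernel / SR losses.
        # Inside the loop they are overwritten each iteration,
        # NOT accumulated into total_loss.
        d_ker = 0.1 * (step + 1)
        d_sr = 0.2 * (step + 1)
    # Outside the loop: only the final iteration's losses are added.
    total_loss += d_ker
    total_loss += d_sr
    return total_loss
```

With `num_iters=4`, only the step-4 values (`d_ker = 0.4`, `d_sr = 0.8`) contribute, so earlier iterations receive no direct gradient signal from the loss.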

Thanks a lot, Charles

greatlog commented 2 years ago

I have tried supervising the network at every iteration, but it made the training process very unstable. Thus, I keep only the supervision at the final iteration.