Thanks for your work! I am confused: does the gradient branch participate in back-propagation?
Yes, it does~
Oh, but I saw requires_grad = False set for weight_h and weight_v in /model/SPSR_model.py. Why is that?
We use a convolutional layer with fixed kernels to extract gradient maps, which is why we set requires_grad=False for weight_h and weight_v. However, the learnable parameters of the gradient branch are still updated by back-propagation as normal.
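For reference, here is a minimal sketch of what such a fixed-kernel gradient extractor can look like in PyTorch. The class name `GradientExtractor` and the exact difference kernels are illustrative assumptions, not copied from /model/SPSR_model.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientExtractor(nn.Module):
    """Extracts a gradient magnitude map with fixed (non-learnable) kernels."""
    def __init__(self):
        super().__init__()
        kernel_v = torch.tensor([[0., -1., 0.],
                                 [0.,  0., 0.],
                                 [0.,  1., 0.]]).view(1, 1, 3, 3)
        kernel_h = torch.tensor([[ 0., 0., 0.],
                                 [-1., 0., 1.],
                                 [ 0., 0., 0.]]).view(1, 1, 3, 3)
        # requires_grad=False: the kernels stay fixed during training,
        # but gradients still flow *through* the convolution to the input.
        self.weight_v = nn.Parameter(kernel_v, requires_grad=False)
        self.weight_h = nn.Parameter(kernel_h, requires_grad=False)

    def forward(self, x):
        # Apply the filters channel by channel, then combine into a magnitude map.
        grads = []
        for c in range(x.shape[1]):
            ch = x[:, c:c + 1]
            gv = F.conv2d(ch, self.weight_v, padding=1)
            gh = F.conv2d(ch, self.weight_h, padding=1)
            grads.append(torch.sqrt(gv ** 2 + gh ** 2 + 1e-6))
        return torch.cat(grads, dim=1)
```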
So during back-propagation, the pixel values of the image are optimized rather than the weights of the filter?
Yes.
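A quick check, reusing the hypothetical `GradientExtractor` from the sketch above, confirms this behaviour: gradients flow back to the input, while the fixed kernels receive none.

```python
x = torch.randn(1, 3, 8, 8, requires_grad=True)
extractor = GradientExtractor()
loss = extractor(x).mean()
loss.backward()

print(x.grad is not None)               # True: the input image receives gradients
print(extractor.weight_v.grad is None)  # True: the fixed kernel accumulates none
```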
Thank you very much! I also want to know: the gradient branch is fed into the SR branch using torch.cat((grad, sr), dim=1). What is the difference between this and (grad + sr)?
The former is concatenation along the channel dimension, while the latter is element-wise addition of the values.
Oh no, that is not what I meant. I mean: if we send either torch.cat((grad, sr), dim=1) or (grad + sr) into the following convolutional layer, what is the difference?
I think concatenation is better, since it can subsume addition as a special case. However, both are reasonable, and you can try each to see which works better. (See the sketch below.)
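To illustrate why concatenation subsumes addition: a convolution applied after torch.cat can reproduce (grad + sr) exactly if its weights happen to sum the corresponding channels. A minimal sketch, where the 64-channel width and the 1x1 fusion conv are illustrative assumptions, not taken from SPSR:

```python
import torch
import torch.nn as nn

# Hypothetical feature maps from the gradient and SR branches.
grad = torch.randn(1, 64, 32, 32)
sr   = torch.randn(1, 64, 32, 32)

# Fusion by concatenation: a 1x1 conv over the 128 stacked channels.
fuse = nn.Conv2d(128, 64, kernel_size=1, bias=False)

# Hand-set the weights so the conv reproduces plain addition:
# output channel c = grad channel c + sr channel c.
with torch.no_grad():
    w = torch.zeros(64, 128, 1, 1)
    for c in range(64):
        w[c, c] = 1.0        # picks up channel c of grad (first half)
        w[c, 64 + c] = 1.0   # picks up channel c of sr (second half)
    fuse.weight.copy_(w)

out_cat = fuse(torch.cat((grad, sr), dim=1))
out_add = grad + sr
print(torch.allclose(out_cat, out_add, atol=1e-5))  # True
```

A trained network is of course free to learn very different fusion weights, which is what makes concatenation the more expressive choice.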
Great, thank you for your reply! My idea is that the gradient contains the high-frequency information of the image. After the gradient branch finishes, its output should be added to the SR features, i.e. the (grad + SR) operation, to enhance the high-frequency information of the SR result and thus restore the sharpness of the image. As you said, concatenation can cover the addition case, so it is fine to use concatenation.
:-)
@Florrie-Giu, did you try addition (grad + SR)? I was also thinking in a similar direction: adding should boost the high-frequency details and thus improve the sharpness.