cszn / SRMD

Learning a Single Convolutional Super-Resolution Network for Multiple Degradations (CVPR, 2018) (Matlab)
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Learning_a_Single_CVPR_2018_paper.pdf

A little question about network training #18

Open cyfwry opened 4 years ago

cyfwry commented 4 years ago

Hello, I cannot understand one sentence in your paper: "When the training error keeps unchanged in five sequential epochs, we merge the parameters of each batch normalization into the adjacent convolution filters." I tried to figure it out by reading your code, but I could not follow the Matlab... Thanks!
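For what it's worth, the sentence describes a standard trick: once training has converged, a batch-normalization layer (a fixed per-channel affine transform at that point) can be folded into the preceding convolution so the BN layer disappears at inference time. A minimal NumPy sketch of the algebra (not the authors' Matlab code; shapes and names are my own):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv layer.

    W:     conv weights, shape (out_channels, in_channels, kh, kw)
    b:     conv bias, shape (out_channels,)
    gamma, beta, mean, var: per-channel BN parameters, shape (out_channels,)
    Returns fused (W', b') such that conv'(x) == BN(conv(x)).
    """
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    W_fused = W * scale[:, None, None, None]     # scale each output filter
    b_fused = beta + (b - mean) * scale          # absorb BN shift into the bias
    return W_fused, b_fused
```

Since BN at inference is just `y -> gamma * (y - mean) / sqrt(var + eps) + beta` per channel, scaling the filters and shifting the bias reproduces it exactly.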

nanmehta commented 4 years ago

Hi, please help me out: which training dataset is used here, and where is the code for generating the kernels? I would be very thankful to you.

cyfwry commented 4 years ago

> Hi, please help me out: which training dataset is used here, and where is the code for generating the kernels? I would be very thankful to you.

You can find the code for both questions in SRMD/TrainingCodes/generatepatches.m and SRMD/TrainingCodes/Demo_Get_PCA_matrix.m. The training dataset is described in the paper; it is a mixture of several datasets with some preprocessing.
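As context for Demo_Get_PCA_matrix.m: the paper samples many blur kernels, vectorizes them, and learns a PCA basis so each kernel can be represented by a short code. A rough NumPy sketch of that idea, restricted here to isotropic Gaussians for simplicity (the kernel size 15x15 and the t = 15 projection dimension follow the paper; the sigma range and sample count are my assumptions):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# Sample kernels over a sigma range and learn a PCA basis via SVD.
kernels = np.stack([gaussian_kernel(15, s).ravel()
                    for s in np.linspace(0.2, 3.0, 500)])    # (500, 225)
U, S, Vt = np.linalg.svd(kernels - kernels.mean(0), full_matrices=False)
pca_basis = Vt[:15]                                           # (15, 225)

# Project any test-time kernel onto the basis to get its 15-dim code.
code = pca_basis @ gaussian_kernel(15, 1.6).ravel()
```

The actual Matlab demo also covers anisotropic Gaussians; this is only meant to show the vectorize-then-project step.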

marnie007 commented 3 years ago

I don't understand the training process. The blur kernel is a Gaussian kernel sampled from a range (is this the meaning of the non-blind model?), which is then dimension-stretched and fed as input together with the LR image. If the LR image is still produced by bicubic downsampling, why does adding the blur kernel give better results?
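For reference, the "dimensionality stretching" the paper describes can be sketched as follows: the PCA kernel code and the noise level are tiled into per-pixel maps and concatenated with the LR image channels, so the network sees the degradation parameters at every spatial location. A minimal NumPy illustration (function name and shapes are my own, not from the repo):

```python
import numpy as np

def stretch_degradation(lr_image, kernel_code, noise_sigma):
    """Dimensionality stretching: tile the degradation parameters into
    per-pixel maps and concatenate them with the LR image channels.

    lr_image:    (H, W, C) low-resolution input
    kernel_code: (t,) PCA-projected blur kernel
    noise_sigma: scalar noise level
    Returns an (H, W, C + t + 1) network input.
    """
    H, W, _ = lr_image.shape
    params = np.concatenate([kernel_code, [noise_sigma]])     # (t + 1,)
    maps = np.broadcast_to(params, (H, W, params.size))       # stretch to maps
    return np.concatenate([lr_image, maps], axis=-1)
```

The point of the extra channels is that a single network can then condition its restoration on the actual kernel and noise level instead of implicitly assuming one fixed degradation.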