Closed ghost closed 5 years ago
waifu2x uses Fully Convolutional Networks (FCNs). A network that consists only of convolution layers simply applies filters to the input image, so it can handle inputs of variable size.
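That property can be illustrated with a toy pure-NumPy convolution (just a sketch, not the repo's code): the same kernel slides over an image of any size, so no fixed input dimension is baked into the operation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Same filter weights applied at every spatial position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0  # a 3x3 box-blur filter
for size in [(8, 8), (16, 12)]:
    out = conv2d(np.random.rand(*size), kernel)
    print(size, '->', out.shape)
```

With a 3x3 kernel and no padding, an (8, 8) input yields a (6, 6) output and a (16, 12) input yields a (14, 10) output; the filter itself never constrains the input size.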
I read your code and found that you extract patches from the original image as inputs when training. I also found some other data-processing methods such as "array_to_wand" and "wand_to_array" in "iproc.py". I don't understand why you convert the data format here.
The original waifu2x uses GraphicsMagick, a fork of ImageMagick, to generate training data. So waifu2x-chainer uses Wand, a simple ImageMagick binding, to reproduce the same results.
Oh, thanks. I have been reading and testing the code and now I understand the data processing. Now I'm having trouble with ClippedWeightedHuberLoss (the loss function): I can understand the forward function, but I can't understand the backward function. Can you give me some hints? (My English is very poor, haha.)
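Not the repo's actual implementation, but a hypothetical NumPy sketch of a plain weighted Huber loss (clipping of the final loss value omitted for simplicity) may give some intuition: the backward pass is just the weighted residual clipped to [-delta, delta], because that is exactly the derivative of the piecewise quadratic/linear forward.

```python
import numpy as np

def weighted_huber_forward(y, t, w, delta=0.1):
    """Mean weighted Huber loss: quadratic near zero, linear beyond |diff| = delta."""
    diff = (y - t) * w
    abs_diff = np.abs(diff)
    loss = np.where(abs_diff <= delta,
                    0.5 * diff ** 2,                  # quadratic branch
                    delta * (abs_diff - 0.5 * delta)) # linear branch
    return loss.mean(), diff  # keep diff for the backward pass

def weighted_huber_backward(diff, w, delta=0.1):
    """Gradient w.r.t. y. Derivative of the quadratic branch is diff;
    of the linear branch it is delta * sign(diff) -- together that is
    clip(diff, -delta, delta). The chain rule multiplies by w again,
    and the mean divides by the element count."""
    return np.clip(diff, -delta, delta) * w / diff.size
```

For example, with y = [0.2, -0.5], t = [0, 0], w = [1, 1], delta = 0.1, both residuals fall on the linear branch, so the gradient is [0.05, -0.05] (clip to ±0.1, then divide by 2 for the mean).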
I've learned that a CNN can handle variable inputs, but the output size we get then differs for different inputs. What can I do about that?