yulunzhang / RDN

Torch code for our CVPR 2018 paper "Residual Dense Network for Image Super-Resolution" (Spotlight)

Training Setting #16

Open Mmmofan opened 4 years ago

Mmmofan commented 4 years ago

Hi, your work is excellent and I'm trying to reimplement it so that I can understand it more deeply.

I have one question: you said in your paper that the "learning rate decreases half for every 200 epochs", and that training RDN "takes 1 day with a Titan Xp GPU for 200 epochs". Does that mean you never actually halved the learning rate during training? Because, as far as I understand, you train your net for only 200 epochs.
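For reference, the schedule described in the paper would look like this in modern PyTorch (the repo itself is Lua Torch, so this is only a minimal sketch with a stand-in model; it illustrates that with exactly 200 epochs, the first halving would coincide with the end of training):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in model; the real network would be RDN (original code is Lua Torch).
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# "Learning rate decreases half for every 200 epochs": halve the LR
# every 200 epochs. If training stops at epoch 200, the decay never
# takes effect during training.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(200):
    # ... 1,000 iterations of back-propagation per epoch; one dummy
    # iteration shown here with L1 loss, as used in the paper ...
    out = model(torch.randn(16, 3, 32, 32))
    loss = nn.functional.l1_loss(out, torch.randn_like(out))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```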

Btw, do you think data augmentation is necessary for SR tasks? If the input is always a patch, the 800 images of DIV2K can already produce enough patches for 200 epochs of 1,000 iterations each.
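For concreteness, the augmentation in question amounts to random flips plus a 90° rotation of each training patch. A minimal PyTorch-style sketch, assuming C×H×W tensors (the function name and patch layout are illustrative, not from the repo):

```python
import random
import torch

def augment(lr_patch: torch.Tensor, hr_patch: torch.Tensor):
    """Randomly flip a pair of C x H x W patches horizontally/vertically
    and rotate by 90 degrees; LR and HR must receive the same transform."""
    if random.random() < 0.5:  # horizontal flip
        lr_patch, hr_patch = torch.flip(lr_patch, [2]), torch.flip(hr_patch, [2])
    if random.random() < 0.5:  # vertical flip
        lr_patch, hr_patch = torch.flip(lr_patch, [1]), torch.flip(hr_patch, [1])
    if random.random() < 0.5:  # 90-degree rotation
        lr_patch = torch.rot90(lr_patch, 1, [1, 2])
        hr_patch = torch.rot90(hr_patch, 1, [1, 2])
    return lr_patch, hr_patch
```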

Looking forward to your reply!

vircke commented 4 years ago

I have the same question. In this paper: "We randomly augment the patches by flipping horizontally or vertically and rotating 90°." Adding the original 800 images, the total = 800 × 3 + 800 = 3,200 images. With minibatch = 16, an epoch would then contain 200 iterations (3200 / 16). But the paper says "1,000 iterations of back-propagation constitute an epoch."

One day I saw a paper named "Dual-Path Recurrent Network for Image Super-Resolution", which says in its Implementation Details: "We randomly augment the patches by flipping horizontally or vertically and rotating 90°. 200 iterations of back-propagation constitute an epoch."

[Image: Implementation Details excerpt from the Dual-Path Recurrent Network paper]
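To make the mismatch explicit, here is the arithmetic behind the two definitions of an epoch quoted above (a minimal sketch; the numbers are the ones from the comments, and the point is that the RDN paper fixes an epoch by iteration count rather than by one pass over the image list):

```python
# Arithmetic behind the two "epoch" definitions quoted above.

images = 800                     # DIV2K training images
augmented_total = 800 * 3 + 800  # originals plus 3 augmented copies = 3,200
minibatch = 16

# Definition 1: one pass over the augmented image list.
print(augmented_total // minibatch)  # -> 200 iterations

# Definition 2 (RDN paper): an epoch is fixed at 1,000 iterations,
# i.e. patches are sampled randomly and one epoch sees 16,000 patches
# regardless of how many source images there are.
iters_per_epoch = 1000
print(iters_per_epoch * minibatch)   # -> 16,000 patches per epoch
```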