yulunzhang / RDN

Torch code for our CVPR 2018 paper "Residual Dense Network for Image Super-Resolution" (Spotlight)

epochs in training #7

Closed sdlpkxd closed 5 years ago

sdlpkxd commented 5 years ago

Hi, thanks for your great work. I see that nEpochs is set to 10000 in opts.lua. Does that mean we need 10000 epochs to get the results in the paper?

yulunzhang commented 5 years ago

Hi, no. We report the results with 1000 epochs.
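
For anyone following along: EDSR-style Torch code typically declares options like this with `torch.CmdLine`, so the epoch count can be overridden at launch time instead of editing opts.lua. A minimal sketch (the exact default and help string in the repo's opts.lua may differ):

```lua
-- Minimal sketch of a torch.CmdLine option declaration (illustrative;
-- the actual opts.lua may use a different default and help text).
local cmd = torch.CmdLine()
cmd:option('-nEpochs', 10000, 'Number of epochs to train')
local opt = cmd:parse(arg)

-- Override on the command line instead of editing the file:
--   th main.lua -nEpochs 1000
```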

sdlpkxd commented 5 years ago

Got it, thanks for your reply. I also see that you skip some batches when currentErr (the loss) is very large during training. Does that mean those batches are not suitable for training? I'm confused by this; could you please explain? Thank you very much.

yulunzhang commented 5 years ago

Hi,

This code is built on EDSR (Torch version). In general, we do not encounter such cases, where the loss on some batches is much larger than on the batches just before them.

However, we may still encounter batches that produce a very large loss. For example, this can happen when you finetune a network from a pretrained model with a larger learning rate. In that case, we simply skip those batches to further stabilize the training process. So, yes, we can regard those batches as not suitable for training.
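
To make the mechanism concrete, here is a minimal Lua/Torch sketch of this kind of skip-batch guard. The toy model, variable names, and threshold are illustrative assumptions, not the repo's actual code:

```lua
require 'nn'

-- Illustrative setup: a toy model and random data stand in for the
-- real super-resolution network and data loader.
local model = nn.Linear(16, 16)
local criterion = nn.MSECriterion()

local skipThreshold = 3       -- assumed multiplier; the real value may differ
local errLast = math.huge     -- loss of the last batch that was actually used

for step = 1, 100 do
    local input  = torch.randn(8, 16)
    local target = torch.randn(8, 16)

    local output     = model:forward(input)
    local currentErr = criterion:forward(output, target)

    if currentErr > skipThreshold * errLast then
        -- Abnormally large loss relative to the previous batch: skip the
        -- update so one bad batch cannot destabilize training.
        print(('step %d: skipped (loss %.4f)'):format(step, currentErr))
    else
        model:zeroGradParameters()
        model:backward(input, criterion:backward(output, target))
        model:updateParameters(0.01)  -- plain SGD step for the sketch
        errLast = currentErr
    end
end
```

The key design point is that the guard compares against the loss of recently used batches, so the threshold adapts as training progresses; a skipped batch never updates `errLast`.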

sdlpkxd commented 5 years ago

Ok, thank you very much.

ch135 commented 4 years ago

Excuse me! Could you tell me what "previous batches" means here? Does it refer only to the last batch, or to earlier batches in general?