Closed — sdlpkxd closed this issue 5 years ago
Hi. No, we report the results with 1000 epochs.
Got it, thanks for your reply. I also noticed that you skip some batches when the currentErr (loss) is very large during training. Does this mean that those batches are not suitable for training? I'm confused by this; could you please explain? Thank you very much.
Hi,
This code is built on EDSR (Torch version). In general, we do not encounter cases where the error of some batches is much larger than that of the preceding batches.
However, we may still encounter batches that produce a very large loss. For example, this can happen when you fine-tune a network from a pretrained model with a larger learning rate. In that case, we simply skip those batches to further stabilize the training process. So we can consider such batches unsuitable for training.
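As a rough illustration of the batch-skipping idea described above, here is a minimal sketch (not the authors' actual Torch/Lua code): skip the parameter update whenever the current batch loss greatly exceeds the running average of recent losses. The function names, the `ratio` threshold, and the history length are all illustrative assumptions.

```python
from collections import deque

def should_skip(current_loss, recent_losses, ratio=5.0):
    """Return True if the loss spikes well above the recent average.

    `ratio` is a hypothetical threshold; the real code may use a
    different criterion for deciding that a batch's loss is "too large".
    """
    if not recent_losses:
        return False  # no history yet, so never skip the first batches
    avg = sum(recent_losses) / len(recent_losses)
    return current_loss > ratio * avg

# Keep a bounded window of losses from batches we actually trained on.
history = deque(maxlen=100)

def train_step(loss):
    """Sketch of one training step: skip the update on a loss spike."""
    skip = should_skip(loss, history)
    if not skip:
        history.append(loss)
        # ... optimizer update would happen here ...
    return skip
```

A skipped batch is also left out of the running average, so a single outlier cannot inflate the baseline and mask later spikes.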
Ok, thank you very much.
Excuse me! Could you tell me what "previous batches" means? The most recent batches, or the earlier ones?
Hi, thanks for your great work. I noticed that nEpochs is set to 10000 in opts.lua. Does this mean that we need 10000 epochs to get the results in the paper?