Dear author:
When I use your code to train EDSR and my own network, I always run into a problem:
The loss value and the PSNR evaluated on all the benchmark datasets (U100, B100, Set5, Set14) suffer a sudden deterioration, for example:
[Epoch 26] Learning rate: 1.00e-4
[1600/16000] [L1: 5.9633] 65.8+0.4s
[3200/16000] [L1: 6.0812] 94.4+0.1s
[4800/16000] [L1: 5.9816] 65.4+0.1s
[6400/16000] [L1: 5.9686] 65.9+0.1s
[8000/16000] [L1: 5.9985] 65.5+0.1s
[9600/16000] [L1: 6.0579] 66.2+0.1s
[11200/16000] [L1: 2948.0002] 65.9+0.1s
[12800/16000] [L1: 3150.1106] 65.8+0.1s
[14400/16000] [L1: 3019.9456] 65.6+0.1s
[16000/16000] [L1: 2876.2886] 65.5+0.1s
I have run several experiments, and when this happens is not consistent: sometimes it occurs right at the beginning of training, sometimes near the end, and sometimes at other epochs. However, if training continues long enough (i.e., the number of epochs is set large), the loss and PSNR return to a normal level, though slightly worse than the best values reached before the deterioration.
At first I thought there might be something wrong with the DIV2K training dataset I had downloaded, but after re-downloading it the same phenomenon occurred.
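One workaround I am considering is clipping gradients before each optimizer step, in case the spikes come from an occasional exploding gradient. Below is a minimal sketch of what I mean, using a standard PyTorch loop with toy stand-ins (`model`, `optimizer`, and the random tensors are placeholders, not names from your code):

```python
import torch
import torch.nn as nn

# Toy stand-ins: in the real run these would be EDSR and a DIV2K batch.
model = nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

x = torch.randn(8, 4)
y = torch.randn(8, 4)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()

# Clip the global gradient norm before stepping; this caps the update
# size when one batch suddenly produces a huge gradient.
total_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

I have not verified that this fixes the deterioration; it only limits how much a single bad batch can move the weights.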
So how can I tackle this problem?
Thank you in advance!