In your paper you say that "The training process is divided into two stages. First, we train a PSNR-oriented model with the L1 loss. The learning rate is initialized as 2×10^−4 and decayed by a factor of 2 every 2×10^5 of mini-batch updates."
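To make sure I'm reading that schedule correctly, here is how I've implemented it. This is just a minimal sketch of my own interpretation, assuming a plain step decay with no warm-up; `stage1_lr` is my own helper name, not anything from your code:

```python
def stage1_lr(step, base_lr=2e-4, decay_factor=2, decay_every=200_000):
    """My reading of the stage-1 schedule: start at 2e-4 and halve
    the learning rate every 2e5 mini-batch updates (assumption:
    simple step decay, no warm-up)."""
    return base_lr / (decay_factor ** (step // decay_every))

# First few decay boundaries under this interpretation:
for s in (0, 199_999, 200_000, 400_000):
    print(s, stage1_lr(s))
# -> 0 0.0002
#    199999 0.0002
#    200000 0.0001
#    400000 5e-05
```

Is that the intended behavior, or is the decay applied differently (e.g. multiplicatively at fixed milestones in the config)?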
Can you please show an example of an image output by the generator, along with the real image, after this first stage is completed? It'd really help me know if I'm on the right track.
Regarding the number of mini-batches to train for in stage 1: is it simply enough for the PSNR to level off?
On a slightly unrelated note: the low-resolution images are generated with the 'bicubic' method. Does ESRGAN only work with this method, or is the algorithm independent of how the LR images are generated?
Thanks