In many image restoration tasks, such as super-resolution and denoising, I find that after one or two epochs of training the loss settles at an approximately stable value. Is this normal?
To the best of my understanding, if the network is well designed (i.e., effective), the loss should keep decreasing until it reaches a local minimum, which usually takes more than two epochs.