In the README file, we have provided the running script containing the hyper-parameters we adopted in our experiments; it should achieve results similar to those of our provided model.
I see. Thanks for the reply!
When running `train.py` with the provided arguments, we replaced `--niter 500 --niter_decay 200` with `--niter 100 --niter_decay 40` to reduce the running time. However, even with one-fifth of the epochs, several loss metrics rise towards the end of training (see the log files). For example, `pair_L1loss` went up from ~8 to ~9, and `perceptual` went up from ~5 to ~6, while there was no obvious decrease in the other loss metrics. Initially, this made us wonder whether the remaining epochs are unnecessary. However, your pre-trained model does yield visually better results than the one we trained with fewer epochs, which suggests the extra epochs do matter. This is quite interesting.
Could you please provide the training parameters you used so that we can reproduce your results?
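For reference, this is roughly how we invoked the shortened run. It is only a sketch: the schedule flags are the only ones we changed, and the remaining arguments (dataset paths, model name, etc.) are meant to be copied verbatim from the README's training script rather than guessed here.

```bash
# Sketch of the shortened training run.
# --niter / --niter_decay are the only flags we changed; all other
# flags should be taken verbatim from the script provided in the README
# (they are omitted here rather than guessed).
python train.py --niter 100 --niter_decay 40
```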