wzhouxiff / RestoreFormer

[CVPR 2022] RestoreFormer: High-Quality Blind Face Restoration from Undegraded Key-Value Pairs
Apache License 2.0
332 stars 35 forks

Training settings in the paper differ from the code #15

Open aapanaetov opened 1 year ago

aapanaetov commented 1 year ago

Hi! Thank you for publishing such an amazing work! In the paper the learning rate is decayed after 6e5 steps, while in HQ_Dictionary.yaml it is set to 4e5 steps. The schedule steps, learning rate, and loss weights in the configs for both the HQ dictionary and RestoreFormer differ from the paper. Which settings should I use to reproduce your excellent results?

wzhouxiff commented 1 year ago

Please follow the settings described in the paper. Note that the learning rate set in the config is not the actual learning rate: it is divided by the number of GPUs used, and the learning rate reported in the paper is the value after this division.
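To make the relationship concrete, here is a minimal sketch of the conversion the maintainer describes. The function name and the numeric values are hypothetical illustrations, not values taken from the paper or the repository configs; the only assumption carried over from the reply is that the effective rate equals the config value divided by the GPU count.

```python
def effective_lr(config_lr: float, num_gpus: int) -> float:
    """Learning rate actually used in training, per the maintainer's reply:
    the value in the YAML config divided by the number of GPUs.
    """
    return config_lr / num_gpus


# Hypothetical example: a config value of 2.0e-4 trained on 8 GPUs
# corresponds to an effective learning rate of 2.5e-5.
print(effective_lr(2.0e-4, 8))
```

So when matching the paper's reported learning rate, multiply it by your GPU count before writing it into the config, or verify that `config_lr / num_gpus` equals the paper's value.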