ghost opened this issue 3 years ago
In https://github.com/mit-han-lab/data-efficient-gans/blob/master/DiffAugment-stylegan2/training/training_loop.py#L92
it says:
```python
# Learning rate.
s.G_lrate = G_lrate_dict.get(s.resolution, G_lrate_base)
s.D_lrate = D_lrate_dict.get(s.resolution, D_lrate_base)
if lrate_rampup_kimg > 0:
    rampup = min(s.kimg / lrate_rampup_kimg, 1.0)
    s.G_lrate *= rampup
    s.D_lrate *= rampup
```
if I test this with:
```python
for lrate_rampup_kimg in [0.1, 0.5, 1.0, 1.5, 2.0, 100.0, 200.0, 300.0, 1000.0, 300.0, 3000.0]:
    # Learning rate.
    G_lrate = 0.002
    D_lrate = 0.002
    for kimg in range(1, 300):
        if lrate_rampup_kimg > 0:
            rampup = min(kimg / lrate_rampup_kimg, 1.0)
            G_lrate *= rampup
            D_lrate *= rampup
    print('rampup:', lrate_rampup_kimg, 'final_lr:', G_lrate)
```
I get:
```
rampup: 0.1 final_lr: 0.002
rampup: 0.5 final_lr: 0.002
rampup: 1.0 final_lr: 0.002
rampup: 1.5 final_lr: 0.0013333333333333333
rampup: 2.0 final_lr: 0.001
rampup: 100.0 final_lr: 1.8665243088788857e-45
rampup: 200.0 final_lr: 9.815659915232974e-89
rampup: 300.0 final_lr: 4.471534887652881e-132
rampup: 1000.0 final_lr: 2.0403834147762724e-288
rampup: 300.0 final_lr: 4.471534887652881e-132
rampup: 3000.0 final_lr: 0.0
```
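If I'm reading my own test right, the vanishing values are just the compounding: for a rampup length `R`, the inner loop multiplies the learning rate by `k / R` for every `kimg = k < R` (and by 1.0 afterwards), so the final value is `base * (R - 1)! / R**(R - 1)`, which underflows almost immediately:

```python
import math

# Closed form of the compounded loop above: every step k < R multiplies
# the rate by k/R, steps k >= R multiply by 1.0, so the product telescopes
# to base * (R-1)! / R**(R-1).
base = 0.002
R = 100
final = base * math.factorial(R - 1) / R ** (R - 1)
print(final)  # same order of magnitude as the 1.8665...e-45 printed for rampup 100.0
```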
Can someone please explain how this is rampup code?
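For comparison: if `training_schedule()` rebuilds `s` on every call (so the `*=` hits a freshly-reset base rate once per schedule update instead of compounding — I'm not sure this is the intent), the factor would act as a linear ramp. A standalone sketch under that assumption, with made-up base rate and rampup length:

```python
# Sketch: the quoted rampup applied to a freshly-reset base rate each tick
# (assumption: `s` is rebuilt every call, so `*=` is not cumulative).
# G_lrate_base and lrate_rampup_kimg below are illustrative values only.
G_lrate_base = 0.002
lrate_rampup_kimg = 10.0

schedule = []
for kimg in [0.0, 2.5, 5.0, 10.0, 20.0]:
    G_lrate = G_lrate_base                       # reset each schedule update
    rampup = min(kimg / lrate_rampup_kimg, 1.0)  # linear ramp from 0 to 1
    G_lrate *= rampup
    schedule.append((kimg, G_lrate))

print(schedule)
```

With a per-update reset, the rate climbs linearly from 0 to the base rate over the first `lrate_rampup_kimg` kimg and then stays there, which is what I would expect "rampup" to mean.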