Closed zhangyunming closed 3 years ago
Thank you for your interest in our work.
The learning rates are fixed at g_lr = 0.0001 and d_lr = 0.0004. We did not experiment with other policies because this simple strategy is already good enough.
You may try other options to see if the performance can be improved further.
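The fixed two-timescale setup above can be sketched as two Adam optimizers with different learning rates. This is a minimal illustration, not the repository's actual training code; the tiny `nn.Linear` models and the `betas=(0.5, 0.999)` choice (a common GAN default) are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in models for illustration only; the real G and D are the
# project's generator and discriminator networks.
G = nn.Linear(8, 8)
D = nn.Linear(8, 1)

# Fixed learning rates as stated above: the discriminator uses a
# 4x larger rate than the generator, and neither is decayed.
g_lr, d_lr = 0.0001, 0.0004
opt_G = torch.optim.Adam(G.parameters(), lr=g_lr, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=d_lr, betas=(0.5, 0.999))
```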
total_epochs is 50 and lr_decay_iters is 50. Does that mean the lr does not change during the training process, i.e. it stays at 0.0002 the whole time?
If I do not use 'step' ('--lr_policy', type=str, default='step', help='learning rate policy. [linear | step | plateau | cosine]'), which one is better?
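The four policies in the help string map naturally onto PyTorch's built-in schedulers. Below is a hedged sketch of such a dispatcher, in the spirit of the pix2pix-style get_scheduler convention; the function name, the decay constants (gamma=0.1, patience=5, etc.), and the linear-decay schedule are assumptions, not the repository's exact implementation. It also illustrates the question above: with 'step' and lr_decay_iters equal to total_epochs, the decay never fires, so the learning rate stays fixed for the whole run.

```python
import torch
from torch.optim import lr_scheduler

def get_scheduler(optimizer, policy, total_epochs=50, lr_decay_iters=50):
    """Sketch of the four options in --lr_policy [linear|step|plateau|cosine].

    All names and constants here are illustrative assumptions.
    """
    if policy == 'linear':
        # Hold lr constant for the first half, then decay linearly toward 0.
        half = total_epochs // 2
        def lambda_rule(epoch):
            return 1.0 - max(0, epoch - half) / float(half + 1)
        return lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
    if policy == 'step':
        # Multiply lr by 0.1 every lr_decay_iters epochs. With
        # lr_decay_iters == total_epochs the first decay would only
        # happen at the end, so the lr is effectively constant.
        return lr_scheduler.StepLR(optimizer, step_size=lr_decay_iters, gamma=0.1)
    if policy == 'plateau':
        # Shrink lr when a monitored metric (passed to step()) stalls.
        return lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                              factor=0.2, patience=5)
    if policy == 'cosine':
        # Anneal lr from its initial value down to 0 along a cosine curve.
        return lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_epochs,
                                              eta_min=0)
    raise NotImplementedError('learning rate policy [%s] is not implemented' % policy)
```

As a quick check of the fixed-lr behaviour: build an Adam optimizer with lr=0.0001, attach the 'step' scheduler with step_size=50, and step through 49 epochs; the learning rate is unchanged at the end, matching the maintainer's statement that the rates stay fixed.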