Hi. I noticed that free adversarial training in the original paper uses a multi-step learning-rate schedule.
I ran free adversarial training for 96/8 = 12 epochs with the multi-step schedule on a single GPU and only got 40.8% accuracy under PGD-20 (eps = 8). I then trained for 205/8 -> 26 epochs and only got 42.01% accuracy under PGD-20 (eps = 8). My initial lr is 0.1 and the lr decays at [1/2 * lr_steps, 3/4 * lr_steps]. The model is WRN34.
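For reference, this is roughly how I set up the schedule (a minimal sketch, not my exact training script; the model here is a stand-in and `lr_steps` / optimizer hyperparameters are placeholders):

```python
import torch

# Stand-in model; in my actual runs this is WRN34.
model = torch.nn.Linear(3 * 32 * 32, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

lr_steps = 26  # e.g. 205/8 -> 26 epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[lr_steps // 2, (3 * lr_steps) // 4],  # decay at 1/2 and 3/4 of training
    gamma=0.1)

for epoch in range(lr_steps):
    # ... one epoch of free adversarial training with minibatch replay ...
    scheduler.step()
```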
Could you please help me figure out what's wrong? I also noticed that cifar10_std = [0.2471, 0.2435, 0.2616] in your settings. Why not cifar10_std = [0.2023, 0.1994, 0.2010]?
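Just to make the normalization question concrete (a sketch only; `cifar10_mean` is the usual value I see, and the two std tuples are the ones I'm comparing):

```python
import torchvision.transforms as transforms

cifar10_mean = (0.4914, 0.4822, 0.4465)

# The std in your settings:
cifar10_std_yours = (0.2471, 0.2435, 0.2616)
# The std I usually see in other CIFAR-10 code:
cifar10_std_other = (0.2023, 0.1994, 0.2010)

normalize = transforms.Normalize(cifar10_mean, cifar10_std_yours)
```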