Open alexriedel1 opened 2 years ago
Hello, why do we need to validate on the adversarial examples during training? Are we using/reporting the best model instead of the last model?
We want to know whether the model is improving (learning) during training.
I verified this myself and found that the PGD-7 accuracy reaches its peak after the learning rate is dropped (probably at epoch 100 or 150). The final model (at epoch 200) achieves a lower PGD-20 accuracy than the best model. I loaded the pre-trained ResNet50 model provided in this repo, and it reports epoch 152. So it looks like they provide the best model, not the last one.
Hey, as an enhancement for custom model training, I propose adding a configuration argument so that the trainer does not evaluate after every epoch, but instead validates every n epochs.
This would speed up the training process, since validation on adversarial examples can take quite a long time and may not make much sense in the early stages of training.
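A minimal sketch of what this could look like: an `eval_every` argument that skips the expensive adversarial validation except every n epochs (and on the final epoch, so the last checkpoint is still scored). The function and argument names here are assumptions for illustration, not the repo's actual API.

```python
def epochs_to_validate(num_epochs, eval_every):
    """Return the 1-based epochs on which validation would run.

    Validation runs every `eval_every` epochs and always on the
    final epoch, so the last model still gets an adversarial score.
    """
    return [
        epoch for epoch in range(1, num_epochs + 1)
        if epoch % eval_every == 0 or epoch == num_epochs
    ]


def validate_pgd(epoch):
    # Stand-in for a real (expensive) PGD evaluation; returns a
    # dummy accuracy so the sketch is runnable on its own.
    return 50.0 + epoch / 100.0


def train(num_epochs=200, eval_every=10):
    best_acc, best_epoch = -1.0, -1
    for epoch in range(1, num_epochs + 1):
        # ... run one training epoch here ...
        if epoch % eval_every == 0 or epoch == num_epochs:
            acc = validate_pgd(epoch)
            if acc > best_acc:
                best_acc, best_epoch = acc, epoch
                # e.g. torch.save(model.state_dict(), "best.pt")
    return best_epoch
```

With `num_epochs=200` and `eval_every=10`, validation would run only 20 times instead of 200, while the "keep the best checkpoint" behavior discussed above is preserved.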