nmndeep / revisiting-at

[NeurIPS 2023] Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models"

adversarial finetuning recipe for downstream datasets? #4

Closed HashmatShadab closed 1 year ago

HashmatShadab commented 1 year ago

Can you please provide more detail on the training recipe used for adversarial finetuning on downstream datasets (CIFAR-10, CIFAR-100, Flowers)? Which optimizer and augmentations are used? Is the adversarial training done the same way as on ImageNet, or is the TRADES framework used?

nmndeep commented 1 year ago

Hi, the main part of the training recipe for the downstream tasks can already be found in App. C.4. We use AdamW as the optimizer and train with the regular CE loss (similar to ImageNet training). We have never used TRADES in our work. Hope this helps.
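For readers looking for a starting point, the recipe described above (adversarial examples generated as in standard ImageNet-style adversarial training, plain cross-entropy loss, AdamW optimizer) can be sketched roughly as below. This is not the authors' code; the attack parameters (`eps`, `alpha`, `steps`), learning rate, and toy model are illustrative assumptions, not values from the paper. See App. C.4 of the paper for the actual hyperparameters.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=2):
    """L-inf PGD adversarial examples; all hyperparameters here are illustrative."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # gradient-sign ascent step, then project back into the eps-ball
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

# toy stand-in model and batch, just to make the sketch runnable
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))

# one adversarial finetuning step: regular CE loss on adversarial examples
x_adv = pgd_attack(model, x, y)
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
opt.step()
```

In a real finetuning run this step would be wrapped in the usual epoch/dataloader loop, with the backbone initialized from the adversarially pretrained ImageNet checkpoint.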

HashmatShadab commented 1 year ago

Thank you!