JZ-LIANG / Ensemble-Adversarial-Training

PyTorch code for ens_adv_train

About randomly selecting a pretrained model at each iteration #1

Closed: linzzzzzz closed this issue 4 years ago

linzzzzzz commented 4 years ago

Thank you for this well-written and well-organized PyTorch code!

Comparing your implementation against the official one, it looks to me like at each iteration you randomly choose one pretrained model to generate adversarial examples, while the official repo loops through all of the pretrained models?
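
To make sure I am reading both codebases correctly, here is a rough sketch of the two variants as I understand them (the names `fgsm`, `static_models`, and `eps` are my own placeholders, not identifiers from either repo):

```python
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM attack against a frozen pretrained model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

# Official scheme (as I read it): every iteration, loop through ALL
# pretrained models and train on adversarial examples from each of them.
def loss_all_generators(model, static_models, x, y, eps):
    xs = [x] + [fgsm(m, x, y, eps) for m in static_models]
    return F.cross_entropy(model(torch.cat(xs)), y.repeat(len(xs)))

# Your scheme (as I read it): every iteration, randomly pick ONE
# pretrained model and generate adversarial examples from it alone.
def loss_one_generator(model, static_models, x, y, eps):
    x_adv = fgsm(random.choice(static_models), x, y, eps)
    return F.cross_entropy(model(torch.cat([x, x_adv])), y.repeat(2))
```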

Not a big deal, but just curious: is there a specific reason you made this tweak?

Thanks.

JZ-LIANG commented 4 years ago

Hello! Sorry for replying so late. I thought of it as just one of the possible variants for implementing Ensemble Adversarial Training: viewed over the whole training procedure, the trained model gets information from all of the adversarial-example generators either way.

I chose this approach to save computation, and in my experiments it gave approximately the same level of adversarial robustness as using multiple generators (3, to be precise) together in each iteration. But I have not run further experiments to check whether using more generators in a single iteration would be better or not (if that is your point).
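
To illustrate with rough numbers of my own (not measurements from this repo): with K pretrained generators, looping through all of them costs K attack passes per iteration, while random selection costs one, yet over T iterations each generator is still selected about T/K times, so the trained model keeps getting adversarial examples from every generator:

```python
import random
from collections import Counter

K, T = 3, 30000  # e.g. 3 pretrained generators, 30k training iterations
picks = Counter(random.randrange(K) for _ in range(T))
print(picks)     # each generator is chosen roughly T/K = 10000 times,
                 # at 1/K of the per-iteration attack cost
```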

Here is another work that uses quite a similar scheme (different generators for different iterations). It is aimed at crafting better attacks rather than at adversarial defense, but the idea looks the same to me: find a shorter (or better) direction to cross the decision boundary in the input space. A better direction can be used to build a stronger attack, and it can also be used to train a better defender.

linzzzzzz commented 4 years ago

Thanks for the response. It makes sense and I will close this.