vtddggg / CAA

The implementation of our paper: Composite Adversarial Attacks (AAAI2021)
https://arxiv.org/abs/2012.05434
Apache License 2.0

Question about the CAA code #5

Closed Jialiang14 closed 2 years ago

Jialiang14 commented 2 years ago

Hello, I am very interested in your work. May I ask a question about test_attacker.py? When idx is not equal to zero, do the original test images also need to be fed to the subsequent attack?

ori_adv_images, _ = apply_attacker(test_images, attack_name, test_labels, model, attack_eps, None, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)
adv_adv_images, p = apply_attacker(subpolicy_out_dict[idx-1], attack_name, test_labels, model, attack_eps, previous_p, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)

vtddggg commented 2 years ago

yes
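For reference, here is a minimal sketch of the behavior confirmed above, assuming a loop over the attackers of a subpolicy list (variable names follow the snippet in the question; the loop structure itself is an assumption, not the exact test_attacker.py code):

```python
# Minimal sketch (an assumption, not the exact test_attacker.py code):
# at idx == 0 the attacker runs on the original test images; at later
# steps it runs both on the original images and on the previous
# attacker's output, reusing the returned perturbation `previous_p`.
subpolicy_out_dict = {}
previous_p = None
for idx, attack in enumerate(subpolicy):
    attack_name = attack['attacker']
    attack_eps = attack['magnitude']
    attack_steps = attack['step']
    if idx == 0:
        adv_images, previous_p = apply_attacker(
            test_images, attack_name, test_labels, model, attack_eps,
            None, int(attack_steps), args.max_epsilon, _type=args.norm,
            gpu_idx=0, target=target_label)
    else:
        # The original test images are also fed to the subsequent attack;
        # ori_adv_images is kept so it can be counted in the final
        # robust-accuracy evaluation as well.
        ori_adv_images, _ = apply_attacker(
            test_images, attack_name, test_labels, model, attack_eps,
            None, int(attack_steps), args.max_epsilon, _type=args.norm,
            gpu_idx=0, target=target_label)
        # The previous attacker's output is chained into this attacker.
        adv_images, previous_p = apply_attacker(
            subpolicy_out_dict[idx - 1], attack_name, test_labels, model,
            attack_eps, previous_p, int(attack_steps), args.max_epsilon,
            _type=args.norm, gpu_idx=0, target=target_label)
    subpolicy_out_dict[idx] = adv_images
```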

Jialiang14 commented 2 years ago

Thank you very much for your reply! Take

subpolicy_linf = [{'attacker': 'PGD_Attack_adaptive_stepsize', 'magnitude': 8/255, 'step': 100}, {'attacker': 'MultiTargetedAttack', 'magnitude': 8/255, 'step': 100}]

as an example: in your implementation, apart from the adversarial examples created by subpolicy_linf, does the final calculation of robust accuracy also include the single attack 'MultiTargetedAttack'? Is my understanding correct?

vtddggg commented 2 years ago

Right, it's just as you think.
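In other words, an image only counts as robust if it survives both the composed subpolicy and the standalone attack. A hypothetical sketch of that union-style accounting (model and test_labels come from the snippet above; the batch names adv_subpolicy and adv_mt are illustrative, not from the repo):

```python
import torch

# Hypothetical accounting sketch (assumption: illustrative, not the
# repo's exact code). `adv_subpolicy` is the batch produced by the full
# subpolicy chain; `adv_mt` is the batch from the single
# MultiTargetedAttack run on the original test images.
with torch.no_grad():
    pred_subpolicy = model(adv_subpolicy).argmax(dim=1)
    pred_mt = model(adv_mt).argmax(dim=1)

# An image counts as robust only if neither evaluation fools the model.
robust_mask = (pred_subpolicy == test_labels) & (pred_mt == test_labels)
robust_accuracy = robust_mask.float().mean().item()
```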