Alibaba-AAIG / Beyond-ImageNet-Attack

Beyond ImageNet Attack (accepted by ICLR 2022): towards crafting adversarial examples for black-box domains.

About the comparative methods #3

Closed lwmming closed 2 years ago

lwmming commented 2 years ago

Thank you for your insightful work! Regarding Table 3, I want to know how to perform PGD or DIM on CUB with source models pretrained on ImageNet. Thank you~

qilong-zhang commented 2 years ago

Hi @lwmming, in this practical black-box scenario, the attacker cannot craft adversarial examples on a substitute model that is trained on the target domain. Therefore, we feed the images to the accessible ImageNet model and update the adversarial examples with that model's loss.
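
(For readers trying to reproduce the Table 3 baselines: a minimal sketch of this setup, assuming a PyTorch/torchvision workflow. The surrogate choice, the helper name `craft_adv`, and the hyperparameters are illustrative, not the repo's exact code.)

```python
import torch
import torchvision.models as models

# ImageNet-pretrained surrogate: the only model the attacker can access with gradients.
surrogate = models.resnet50(pretrained=True).eval()

def craft_adv(x, loss_fn, eps=16/255, alpha=2/255, steps=10):
    """PGD-style update of target-domain (e.g., CUB) images, driven solely by the
    ImageNet surrogate's loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = surrogate(x_adv)          # feed target-domain images to the ImageNet model
        loss = loss_fn(logits)             # loss is defined w.r.t. the surrogate only
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                      # ascend to maximize the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)             # project into the L_inf ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```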

lwmming commented 2 years ago

Hi @qilong-zhang, thank you so much for your prompt reply. What loss do you use to update the adversarial examples when performing PGD? Is it Eq. (7) in the paper?

qilong-zhang commented 2 years ago

No, Eq. (7) is only for our BIA. For PGD and DIM, we adopt the default loss, i.e., cross-entropy loss.

lwmming commented 2 years ago

OK, but how do you assign appropriate labels to the CUB images when using the cross-entropy loss to generate adversarial examples?

qilong-zhang commented 2 years ago

We use the predicted label of the clean input as the true label.
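
(Continuing the sketch above: the surrogate's predicted ImageNet class on the clean image serves as the pseudo ground-truth label for the cross-entropy loss used by the PGD/DIM baselines. Again, the names here are illustrative.)

```python
import torch
import torch.nn.functional as F

# Pseudo labels: the ImageNet class the surrogate predicts for each clean CUB image.
with torch.no_grad():
    pseudo_labels = surrogate(x).argmax(dim=1)

# Untargeted baseline attack: maximize cross-entropy w.r.t. the pseudo labels.
x_adv = craft_adv(x, loss_fn=lambda logits: F.cross_entropy(logits, pseudo_labels))
```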

lwmming commented 2 years ago

I understand. Thank you very much for your patient answers. Looking forward to more of your excellent work~