Closed lwmming closed 2 years ago
Thank you for your insightful work! Regarding Table 3, I would like to know how PGD or DIM is performed on CUB with source models pretrained on ImageNet. Thank you~
Hi @lwmming, in this practical black-box scenario the attacker cannot craft adversarial examples on a substitute model trained in the target domain. Therefore, we feed the images to the accessible ImageNet model and update the adversarial examples using that model's loss.
Hi @qilong-zhang, thank you so much for the prompt reply. What loss do you use to update the adversarial examples when performing PGD? Is it Eq. (7) in the paper?
No, Eq. (7) is only for our BIA. For PGD and DIM, we adopt the default loss, i.e., the cross-entropy loss.
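For concreteness, here is a minimal sketch of such a PGD step in PyTorch. This is not the authors' exact code: `eps`, `alpha`, and `steps` are illustrative values, normalization is assumed to be folded into the model, and where `labels` comes from is answered in the next exchange.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Untargeted L_inf PGD against an ImageNet-pretrained model,
# even though the input images come from another domain (e.g. CUB).
model = models.resnet50(pretrained=True).eval()

def pgd_attack(images, labels, eps=16/255, alpha=1/255, steps=10):
    """One PGD attack with the default cross-entropy loss."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0, 1)
    return adv
```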
OK, but how do you assign appropriate labels to the CUB images when using the cross-entropy loss to generate adversarial examples?
We use the predicted label of the clean input as the true label.
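Continuing the sketch above, the pseudo-labeling could look like this (again just an illustration, assuming `images` is a batch of CUB images in [0, 1]):

```python
# Pseudo-labeling trick from the answer above: take the ImageNet
# model's own prediction on the clean batch as the "true" label.
with torch.no_grad():
    pseudo_labels = model(images).argmax(dim=1)

adv_images = pgd_attack(images, pseudo_labels)
```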
I understand. Thank you very much for your patient answers. Looking forward to more of your excellent work~