- Please refer to Section 4 of the original paper for details.
- What specifically do you mean by "the LR author"?
Thank you for your reply. I will check the PGD paper. By "LR author" I mean the author of the label-consistent backdoor attacks paper.
The original paper I mentioned is Label-Consistent Backdoor Attacks. In Section 4.3:
> In fact, we will use perturbations based on adversarially trained models (Madry et al., 2018) since these perturbations are more likely to resemble the target class for large $\epsilon$.
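To make the contrast concrete, here is a minimal untargeted L-infinity PGD sketch (the function name `pgd_perturb` and the hyperparameters are illustrative, not taken from this repository or the paper). The only difference between the two options discussed in this thread is which network is passed as `model`: the adversarially trained model, as the paper prescribes, or a standard ResNet.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, images, labels, eps=16/255, alpha=2/255, steps=100):
    """Untargeted L-inf PGD: push `images` away from their true `labels` under `model`."""
    model.eval()
    adv = images.clone().detach()
    # Random start inside the eps-ball around the clean images.
    adv = torch.clamp(adv + torch.empty_like(adv).uniform_(-eps, eps), 0.0, 1.0)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)          # maximize loss on the true label
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()            # gradient ascent step
        adv = images + torch.clamp(adv - images, -eps, eps) # project back into the eps-ball
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()

# Sketch of the label-consistent pipeline: perturb the target-class images against the
# *adversarially robust* model, then stamp the backdoor trigger on the perturbed images.
# adv_images = pgd_perturb(robust_model, target_class_images, target_class_labels)
```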
It is clearer now. I think I made a mistake before. Anyway, thank you so much for your help.
Hi author,
I am going to reproduce this repository, but I would like to know why we need to 'Train an Adversarially Robust Model' first. Why not use the original ResNet with a PGD attack to generate the attacked images? I found that the LR author did it this way.
Thank you so much.