Closed · ZifengDing closed this 2 years ago

Hi,

Thanks for presenting this nice work! I have two questions about the paper.

Really looking forward to your reply! It is interesting to apply data augmentation in the latent space, and I think it will inspire a lot of follow-up work.

Best,
Zifeng

Hi @Woolaowu,
Thanks for your interest.
1- We use unconditional normalizing flows in this work; that is, the encoding and decoding operations do not depend on the class label, so the randomized latent attack is completely label-agnostic. For the adversarial latent attack, one needs to compute the classifier loss, which does require class labels. However, it has been observed that adversarial training can suffer from a phenomenon called label leaking (in particular with FGSM). I do not have a good intuition about whether this also applies to our latent adversarial attacks, but to be on the safe side we adopted the standard workaround (used, e.g., in CleverHans) of feeding the model's own prediction instead of the ground-truth label (link to the relevant code). A minimal sketch of this is given after point 2.
2- Certainly, one can use both attacks at the same time, and I would love to hear about the results of any such experiment. One simple composition is sketched below.
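
For concreteness, here is a minimal PyTorch sketch of the label-leaking-safe latent adversarial attack described in point 1. The `flow.encode`/`flow.decode` and `classifier` interfaces, the single FGSM-style step, and the `eps` parameter are all illustrative assumptions, not the authors' actual API; the real implementation is in the code linked above.

```python
import torch
import torch.nn.functional as F


def latent_adversarial_attack(flow, classifier, x, eps=0.1):
    """FGSM-style attack in the latent space of an unconditional flow.

    `flow.encode`/`flow.decode`, `classifier`, and `eps` are hypothetical
    placeholders for illustration only.
    """
    # Encode to latent space; the flow is unconditional, so no label is involved.
    z = flow.encode(x).detach().requires_grad_(True)
    logits = classifier(flow.decode(z))

    # Guard against label leaking: target the model's own prediction,
    # not the ground-truth label (the same remedy used in CleverHans).
    y_pred = logits.argmax(dim=1).detach()
    loss = F.cross_entropy(logits, y_pred)

    grad, = torch.autograd.grad(loss, z)
    # One signed-gradient step in latent space, then decode back to input space.
    return flow.decode(z + eps * grad.sign()).detach()
```

Because the target is `logits.argmax(...)`, the gradient never sees the true label, which is exactly the guard against label leaking mentioned above.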
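
And one plausible way to combine the two attacks, per point 2, is to apply the label-agnostic randomized perturbation first and the adversarial step on its output. This builds on the sketch above and assumes `flow`, `classifier`, and a batch `x` are already defined; `sigma` and the sequential composition are my own illustrative choices:

```python
def latent_randomized_attack(flow, x, sigma=0.1):
    # Completely label-agnostic: Gaussian noise in latent space, then decode.
    z = flow.encode(x)
    return flow.decode(z + sigma * torch.randn_like(z)).detach()


# Hypothetical combined augmentation: randomized perturbation, then an
# adversarial step on the perturbed sample.
x_aug = latent_adversarial_attack(flow, classifier,
                                  latent_randomized_attack(flow, x))
```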