okyksl / flow-lp

Code for "Semantic Perturbations with Normalizing Flows for Improved Generalization"

Some Questions about Paper #1

ZifengDing closed this issue 2 years ago

ZifengDing commented 3 years ago

Hi,

thanks for presenting this nice work!

I have two questions about the paper.

  1. Are the samples generated by both the randomized latent attacks and the adversarial latent attacks given the same label as the original image during training? Since the paper uses the word "attack", I am not sure whether I am understanding this correctly.
  2. It seems that the randomized and adversarial latent attacks could be combined for data augmentation, i.e., using the two methods concurrently. I saw that Table 8 uses multi-step training and that it does not perform best. So what about training the classifier with both of them at the same time?

Really looking forward to your reply! Applying data augmentation in the latent space is interesting, and I think it will inspire a lot of follow-up work.

Best, Zifeng

okyksl commented 3 years ago

Hi @Woolaowu,

Thanks for your interest.

1- We use unconditional normalizing flows in this work; that is, the encoding and decoding operations do not depend on the class label. Therefore, the randomized latent attack is completely label-agnostic. The adversarial latent attack, on the other hand, requires computing the classifier loss, which needs class labels. However, it has been observed that adversarial training can suffer from a phenomenon called label leaking (in particular with FGSM). I do not have a good intuition about whether this also applies to our latent adversarial attacks, but to be on the safe side, we adopted the conventional safeguard used in libraries such as CleverHans: attacking the model's own prediction instead of the ground-truth label (link to the relevant code).
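To make the label-leaking safeguard concrete, here is a minimal NumPy sketch of an FGSM-style step taken in the latent space of a toy linear classifier. This is not the repository's code: the classifier, the `fgsm_latent_attack` helper, and all parameter names are hypothetical stand-ins; the one point it illustrates is using `argmax` of the model's own output as the attack label rather than the ground truth.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fgsm_latent_attack(z, W, b, eps=0.1):
    """FGSM-style ascent step on the latent code z.

    Uses the classifier's prediction as the attack label
    (instead of the ground-truth label) to avoid label leaking.
    """
    p = softmax(W @ z + b)
    y_pred = int(np.argmax(p))            # model prediction, not ground truth
    onehot = np.zeros_like(p)
    onehot[y_pred] = 1.0
    # cross-entropy gradient: dL/dlogits = p - onehot, so dL/dz = W.T @ (p - onehot)
    grad_z = W.T @ (p - onehot)
    return z + eps * np.sign(grad_z)      # step that increases the loss

# toy example: 3 classes, 4-dimensional latent space
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
z = rng.normal(size=4)
z_adv = fgsm_latent_attack(z, W, b, eps=0.1)
print(np.abs(z_adv - z).max())  # perturbation is bounded by eps
```

In the paper's setting, `z` would be the flow's encoding of an image and the perturbed `z_adv` would be decoded back to image space before training.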

2- Certainly, one can use both attacks at the same time, and I would love to hear about the results of any such experiment.