
[NeurIPS 2024] Exploring Adversarial Robustness of Deep State Space Models

Handling Dataset #5

Open TalRub104 opened 1 month ago

TalRub104 commented 1 month ago

Hi, some questions regarding your dataset handling:

1. Why didn't you normalize the CIFAR-10 dataset in `transform_train` and `transform_test`? (See the first sketch after this list for what I mean.)

2. Why did you apply

   `transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor()])`

   instead of just

   `transform = transforms.Compose([transforms.ToTensor()])`?

   For MNIST you didn't apply a similar transform (with a crop size of 28).

3. I noticed that you perform `x = x.view(B, H * W, C)` inside your forward pass. Is there any particular reason you aren't doing this reshape outside the model, when loading the data? (See the second sketch after this list.)

4. Why didn't you add a normalization layer to the model when applying PGD or AA attacks? According to the authors of "Towards Evaluating the Robustness of Visual State Space Models" (see the MambaRobustness GitHub):

   > The Normalize class is included here to enable backpropagation through the normalization process, as we need to compute gradients with respect to the input image for generating adversarial attacks. This technique is commonly employed in adversarial attack methods. Similar to training, the model processes a normalized version of the input image, but instead of normalizing during data preprocessing, we apply it directly within the model. For standard training or evaluation, there's no need to include the Normalize class inside the model, as normalization is handled in the data preprocessing pipeline, and computing gradients with respect to the input image isn't necessary.

   (See the third sketch after this list for the wrapper pattern I have in mind.)
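Re question 1, this is roughly the normalization I had in mind, as a minimal sketch. The mean/std values below are commonly cited CIFAR-10 channel statistics, not values taken from this repo:

```python
import torchvision.transforms as transforms

# Commonly cited CIFAR-10 per-channel statistics (assumption, not from this repo).
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),  # added normalization step
])

transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])
```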
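Re question 3, a minimal sketch of the alternative I was thinking of: flattening the spatial dimensions in the data pipeline instead of inside `forward`. The function name and shape handling here are my assumptions, not code from this repo:

```python
import torch
import torchvision.transforms as transforms

def flatten_to_sequence(img: torch.Tensor) -> torch.Tensor:
    # ToTensor() yields (C, H, W); rearrange to an (H * W, C) token sequence.
    C, H, W = img.shape
    return img.permute(1, 2, 0).reshape(H * W, C)

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(flatten_to_sequence),
])

# The DataLoader would then batch samples directly into (B, H * W, C),
# so forward() would no longer need the x = x.view(B, H * W, C) reshape.
```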
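Re question 4, a minimal sketch of the wrapper pattern the MambaRobustness authors describe: a differentiable normalization module placed in front of the model so the attack can perturb the raw [0, 1] image while gradients flow through the normalization. This is my own sketch; `pgd_attack` and the statistics are placeholders, not code from either repo:

```python
import torch
import torch.nn as nn

class Normalize(nn.Module):
    """Normalization as a module so attack gradients flow back to the raw [0, 1] input."""
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

# Hypothetical usage: the model was trained on normalized inputs, so for PGD/AutoAttack
# it is wrapped with Normalize and fed raw [0, 1] images; the attack perturbs the
# un-normalized input and gradients are computed through the normalization.
# attacked_model = nn.Sequential(Normalize((0.4914, 0.4822, 0.4465),
#                                          (0.2470, 0.2435, 0.2616)), model)
# adv_x = pgd_attack(attacked_model, x_raw, y, eps=8/255)  # pgd_attack is a placeholder
```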