cg563 / simple-blackbox-attack

Code for ICML 2019 paper "Simple Black-box Adversarial Attacks"
MIT License

A potential bug in SimBA class #22

Closed: Yutong-Dai closed this issue 2 years ago

Yutong-Dai commented 3 years ago

Thanks for your nice code.

I am wondering whether there is a potential bug in simba.py, specifically in lines 107 to 109.
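
For context, lines 107 to 109 are the early-termination fill of the per-iteration logs. Paraphrased (reconstructed from the proposed fix below rather than quoted verbatim from simba.py, so the exact names and shapes are assumptions), they amount to:

probs[:, k:] = probs_k.unsqueeze(1).repeat(1, max_iters - k)    # repeat the final probabilities across the remaining columns
succs[:, k:] = torch.ones(batch_size, max_iters - k)            # mark the remaining iterations as successes
queries[:, k:] = torch.zeros(batch_size, max_iters - k)         # no further queries are issued after the early stop

where k is the loop index at which the early stop triggers.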

My observation is as follows.

  1. Suppose, for example, that at the k-th (0-indexed) iteration all of the adversarial examples successfully fool the network, i.e., early termination is triggered.
  2. At that same k-th iteration, I observe that succs[:, k] is all zeros, whereas I expect it to be an all-ones tensor.

The proposed fix is as follows.

probs[:, k-1:] = probs_k.unsqueeze(1).repeat(1, max_iters - k + 1)
succs[:, k-1:] = torch.ones(batch_size, max_iters - k + 1)
queries[:, k-1:] = torch.zeros(batch_size, max_iters - k + 1)

This addresses the concern, but the resulting off-by-one shift still looks odd to me.

Since I tested on my own trained model, I have not included a minimal reproducible example, but I can provide one if you would like.

Thanks in advance.

cg563 commented 3 years ago

You're right that the iterations are 0-indexed, but so are all the arrays probs, succs and queries. Namely, probs[:, 0] contains the model's predicted probabilities at the first iteration (iteration 0). Does this make sense?
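
As a minimal standalone illustration of that convention (a toy sketch, not code from the repository; the batch size and iteration counts are made up):

import torch

batch_size, max_iters = 2, 5
succs = torch.zeros(batch_size, max_iters)

# Suppose the early stop triggers at iteration k = 2 (0-indexed).
# Filling columns k onward marks iteration 2 and every later iteration
# as successful, while columns 0 and 1 (iterations 0 and 1) stay zero.
k = 2
succs[:, k:] = torch.ones(batch_size, max_iters - k)

print(succs)
# tensor([[0., 0., 1., 1., 1.],
#         [0., 0., 1., 1., 1.]])
# Column k lines up with iteration k, so succs[:, k] is the entry for the
# iteration at which the early stop happened.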