rowedenny closed this issue 4 years ago
Hi,
In our tables, "baseline" refers to a single model trained with cross-entropy loss (which we refer to as a Softmax model). We trained the models so that our tables are directly comparable to those of Pang et al.
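Concretely, such a baseline is just one network trained on plain softmax cross-entropy, something along these lines (the layers below are placeholders for illustration, not the exact Table 1 architecture):

```python
# Sketch of a single "Softmax Model" baseline: one classifier trained with
# plain softmax cross-entropy (layers are placeholders, not the Table 1 net).
import tensorflow as tf

def build_baseline(input_shape=(28, 28, 1), num_classes=10):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes),  # logits; softmax is applied inside the loss
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model
```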
Hope this helps
Hi, I am trying to reproduce the results in Table 8, attacking via cleverhans. The accuracy of the classifier (with the same architecture as in Table 1) is above 99%, but under a white-box FGSM attack the accuracy is 77.8 with eps=0.1 and 39.7 with eps=0.2. I also notice that under PGD the accuracy is 33.2 with eps=0.1 and 14.55 with eps=0.15. I expected these numbers to be comparable to the baseline in Table 8, which I assume means the model without any defense.
The way I compute the accuracy is:
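I generate adversarial examples with the attack and then measure accuracy on them, along the lines of the sketch below (written against cleverhans' TF2 attack functions for illustration, not my exact script; the PGD step size and iteration count shown are assumptions):

```python
# Illustrative sketch: accuracy under a white-box FGSM/PGD attack
# using cleverhans' TF2 attack functions on a Keras classifier.
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method
from cleverhans.tf2.attacks.projected_gradient_descent import projected_gradient_descent

def adversarial_accuracy(model, x_test, y_test, attack="fgsm", eps=0.1):
    """Accuracy of `model` (a callable returning logits) on adversarial
    examples crafted from (x_test, y_test); y_test holds integer labels."""
    x = tf.convert_to_tensor(x_test, dtype=tf.float32)
    if attack == "fgsm":
        x_adv = fast_gradient_method(model, x, eps=eps, norm=np.inf,
                                     clip_min=0.0, clip_max=1.0)
    else:
        # PGD; step size and iteration count here are typical choices,
        # not necessarily the ones behind the numbers quoted above.
        x_adv = projected_gradient_descent(model, x, eps=eps, eps_iter=eps / 10.0,
                                           nb_iter=40, norm=np.inf,
                                           clip_min=0.0, clip_max=1.0)
    preds = tf.argmax(model(x_adv), axis=1).numpy()
    return float(np.mean(preds == y_test))

# e.g. adversarial_accuracy(model, x_test, y_test, attack="fgsm", eps=0.1)
```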
Since Tables 8 & 9 follow the same format as Pang's work, I looked through Pang's paper specifically, and Table 2 in the NIPS paper says "Our baseline training method is to train the ensemble with the ECE loss". In short, the baseline in Pang's work means a simultaneously trained ensemble model.
So I am wondering: what does the baseline mean in Tables 8 & 9? Is it a single model trained with cross-entropy?