cassidylaidlaw opened this issue 4 years ago
Apologies for the delayed response, and thanks for pointing this out. For ImageNet, we use targeted attacks to account for the fact that some classes are extremely similar, but as you point out this is less suitable for CIFAR-10. We are looking into how to correct this (subject to compute availability).
Here is a set of images generated by your `elastic` attack on a random sample of CIFAR-10 images against a robust model at eps4 (1):

[image: grid of elastic-attacked CIFAR-10 samples at eps4]

I have no idea what most of these images are. In the cases where some images are recognizable, they have been moved into a different class; for instance, the three "frogs" along the bottom center were originally two dogs and a horse. It seems unreasonable to try to evaluate against such an attack, and you also include two attacks with even greater bounds (eps5 and eps6).
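To make concrete what I mean by the displacement budget being the problem, here is a minimal sketch of a flow-field ("elastic"-style) attack, assuming a plain PGD loop over a per-pixel displacement field warped with `grid_sample`. This is hypothetical illustration code, not your repo's API, and it omits the Gaussian smoothing of the flow that the paper's attack applies; the point is just that eps bounds how far pixels can move, not how much their values change:

```python
import torch
import torch.nn.functional as F

def elastic_attack(model, x, y, eps=0.1, steps=30, step_size=0.01):
    """PGD over a displacement field; eps bounds displacement in [-1, 1] grid units.

    Hypothetical illustration only; the paper's attack also smooths the flow.
    """
    n, _, h, w = x.shape
    # Identity sampling grid over the image in normalized [-1, 1] coordinates.
    theta = torch.eye(2, 3, device=x.device).unsqueeze(0).repeat(n, 1, 1)
    base_grid = F.affine_grid(theta, x.shape, align_corners=False)
    flow = torch.zeros(n, h, w, 2, device=x.device, requires_grad=True)

    for _ in range(steps):
        # Warp the input by the current flow, then step the flow uphill on the loss.
        x_adv = F.grid_sample(x, base_grid + flow, align_corners=False)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, flow)
        with torch.no_grad():
            flow += step_size * grad.sign()  # untargeted: maximize the loss
            flow.clamp_(-eps, eps)           # L_inf bound on pixel *displacement*
    with torch.no_grad():
        return F.grid_sample(x, base_grid + flow, align_corners=False)
```

At small eps this only wobbles edges, but once the allowed displacement spans a sizable fraction of a 32x32 CIFAR-10 image, the warp can move whole object parts around, which matches the images above.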
Do you think your methodology is reasonable here? I was hoping to use your UAR score to do some evaluation for a project I'm working on, but the bounds for the `elastic` attack seem too big. The other attacks' bounds seem more reasonable.
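For reference, this is how I understand the UAR score from your paper; treat the formula and the eps levels as my assumption rather than your official definition: the model's accuracy under an attack, summed over the calibrated eps levels, normalized by the adversarial training accuracy (ATA) at those same levels.

```python
def uar_score(model_accs, ata_accs):
    """UAR as I read it from the paper: 100 * sum(Acc) / sum(ATA).

    model_accs: this model's accuracies under the attack at eps1..eps6.
    ata_accs:   published ATA values (accuracy of models adversarially
                trained against the attack) at the same eps levels.
    """
    assert len(model_accs) == len(ata_accs)
    return 100.0 * sum(model_accs) / sum(ata_accs)
```

Since the high-eps terms enter that sum directly, the choice of the eps5 and eps6 bounds feeds straight into the score, which is why the calibration matters so much here.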