Closed — aknirala closed this issue 1 year ago
Edit: The robust accuracy of imagenet_linf_8 also seems lower than that of imagenet_linf_4 at the same perturbation budget: at a 4/255 perturbation, imagenet_linf_8 gets only 5%.
Should I evaluate them only on restricted ImageNet?
Edit2: The issue seems to have something to do with image normalization. I loaded ImageNet following: https://towardsdatascience.com/downloading-and-using-the-imagenet-dataset-with-pytorch-f0908437c4be
If I use the specified mean and std, I get good natural accuracy, but the images are no longer in the range [0, 1], so I am not sure how to enforce the l_inf norm constraint. If I remove the normalization from the transform, the images are in [0, 1], but natural accuracy drops sharply.
I found that one can scale the perturbation according to the mean and std used for the dataset (of course, I should have thought of that). With that change I can reproduce the reported accuracies, so I am closing the issue.
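For reference, the scaling works like this (a minimal numpy sketch; the mean/std values are the standard torchvision ImageNet constants, and the shapes are illustrative):

```python
import numpy as np

# Standard torchvision ImageNet normalization constants.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

eps = 4 / 255  # l_inf budget in [0, 1] pixel space

# Under x' = (x - mean) / std, a pixel-space l_inf ball of radius eps
# becomes a per-channel box of radius eps / std in normalized space.
eps_normalized = eps / std

# A perturbation delta' in normalized space stays inside the pixel-space
# ball exactly when |delta' * std| <= eps channel-wise:
delta_normalized = (
    np.random.uniform(-1, 1, size=(3, 2, 2)) * eps_normalized[:, None, None]
)
delta_pixel = delta_normalized * std[:, None, None]
assert np.all(np.abs(delta_pixel) <= eps + 1e-12)
```

An alternative that avoids the scaling entirely is to fold the normalization into the model (as a first layer) and run the attack directly in [0, 1] pixel space.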
Are the two robust models for ImageNet Linf-norm (ResNet50) trained on the entire ImageNet with 4/255 and 8/255 robustness, or only on restricted ImageNet?
When I run a PGD20 attack with step size 0.003, I get much lower robust accuracy (tested on a few thousand samples): imagenet_linf_4 with epsilon = 4/255 gives around 15% (instead of 33%), and imagenet_linf_8 with epsilon = 8/255 gives around 5% (instead of 19%).
Their natural accuracy, however, matches the reported numbers (tested on the entire validation set).
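For context, the PGD l_inf update I am using follows the standard recipe, sketched here in numpy on a toy linear loss so the gradient is analytic (in the real attack the gradient comes from the network):

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps, alpha=0.003, steps=20):
    """l_inf PGD: take sign-of-gradient ascent steps, then project back
    onto the eps-ball around x0 and the valid [0, 1] pixel range."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # project onto the l_inf ball
        x = np.clip(x, 0.0, 1.0)            # keep a valid image
    return x

# Toy example: loss = w . x, so the gradient is just the constant w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.array([0.5, 0.5, 0.5])
x_adv = pgd_linf(x0, lambda x: w, eps=4 / 255, alpha=0.003, steps=20)
```

With 20 steps of size 0.003 the iterate saturates the 4/255 ball in every coordinate, which is the intended behavior for this step-size/epsilon combination.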
Here is how I load the weights:
Am I doing something incorrect?